Making Scalable Meta Learning Practical | Accept (poster) | Summary: The paper "Making Scalable Meta Learning Practical" introduces a novel approach called SAMA (Scalable Meta Learning with Arbitrary Optimizers) to address the scalability issues in meta learning. The authors combine advancements in implicit differentiation algorithms and systems to develop SAMA, which supports arbitrary optimizers in meta learning programs while reducing computational burden and utilizing efficient distributed training techniques. In experiments, they focus on data optimization tasks and demonstrate the effectiveness of SAMA across several domains.
Strengths: 1. The paper proposes SAMA, a new method that combines implicit differentiation algorithms and systems to address scalability issues in meta-learning. This innovative approach sets it apart from existing methods.
2. The paper presents compelling experimental results, showing significant throughput increases and memory consumption reductions on single- and multi-GPU setups compared to baseline meta-learning algorithms. These performance improvements validate the effectiveness of SAMA.
3. Well written. This paper is well-written and easy to follow.
Weaknesses: -- Need more clarification on the application scenarios.
1. This paper claims that the proposed SAMA makes scalable meta-learning practical, while the experiments are only carried out on data optimization applications. Technically, however, the method should also be applicable to other applications of meta-learning. In practice, is the method proposed in this paper effective for other applications?
2. There are a large number of meta-learning algorithms, some of which may not involve the computation of a large-scale Jacobian matrix. Does the proposed method work for such algorithms? The application scope should be better clarified.
-- Unclear description
3. How is \theta^* obtained? Is it approximated by several steps or does it need to be trained to converge, and if it needs to be trained to converge, does the meta-learning process require more training time? Is such overhead already taken into consideration in the evaluation?
-- Typos and errors
4. There are some spelling errors in this paper that need correction. Line 259, wrong subscript: the second “ft” should be “pt”. In Equation 5, should the second equal sign also be an approximate-equality sign?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the positive review as well as the helpful feedback. Here, we address each of the questions and comments that you raise.
### **Clarification on the application scenarios**
> **Q1.** This paper claims that the proposed SAMA makes scalable meta-learning practical while the experiments are only carried out in data optimization applications. …. In practice, whether the method proposed in this paper is effective for other applications?
**A.** As the reviewer noted, SAMA can be utilized in other meta learning applications that can be formulated as bilevel optimization, such as few-shot learning, neural architecture search, and hyperparameter optimization. As mentioned in line 209, we also performed traditional (MAML-like) few-shot learning experiments in Appendix D. While most existing few-shot learning works focus on developing new algorithms given a fixed network size, in our experiment we instead study the question of “can we improve the performance of few-shot learning by increasing the model size, given the memory/compute efficiency of SAMA?”, to which our preliminary answer is “yes”.
> **Q2.** There are a large number of meta-learning algorithms, some of which may not involve the computation of a large-scale Jacobian matrix. Does the proposed method work for such algorithms?
**A.** We believe several meta-learning algorithms that avoid the use of a large Jacobian matrix (e.g. Reptile [1]) are specifically designed for few-shot learning, which is one particular application of meta-learning/bilevel optimization. Therefore, it is oftentimes not straightforward to apply these algorithms to a more general class of meta learning applications (e.g. data optimization, neural architecture search) that can be formulated as bilevel optimization. In contrast, SAMA is directly derived from the bilevel optimization formulation of meta learning, and thereby demonstrates wider applicability.
As for applying some components of SAMA (e.g. Sec 3.3: communication optimization) to other meta learning algorithms, we believe that it is possible. For example, our communication optimization strategy can be easily transferred to DARTS [2]. However, as each meta learning algorithm has a different design, our components may need to be additionally adapted to the target algorithm.
[1] Nichol et al., On first-order meta-learning algorithms. Arxiv, 2018.
[2] Liu et al., Darts: Differentiable architecture search. ICLR, 2018.
### **Unclear description**
> **Q3.** How is $\theta^*$ obtained?
**A.** $\theta^*$ is approximated by several unroll updates instead of by training to convergence. As the reviewer expected, by avoiding full training, we were able to significantly reduce the computational burden (refer to lines 97-100). All our experiments are conducted with this several-updates approximation strategy for a fair comparison. More experimental details, including the number of unroll steps, are provided in Appendix B.
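As a generic illustration of this several-updates approximation (a toy sketch of our own, not the paper's code), suppose the base loss is $f(\theta) = \frac{1}{2}(\theta - \lambda)^2$, whose true optimum is $\theta^* = \lambda$. A short unrolled loop of SGD steps stands in for full inner training:

```python
def approx_theta_star(theta, lam, K=5, lr=0.5):
    """Approximate theta* with K unrolled SGD steps instead of training to convergence."""
    for _ in range(K):
        theta = theta - lr * (theta - lam)  # gradient of 0.5 * (theta - lam)**2
    return theta

# A handful of unroll steps already lands close to the true optimum lam = 2.0,
# at a fraction of the cost of full training.
approx = approx_theta_star(theta=0.0, lam=2.0)
```

Each step shrinks the gap to the optimum by a factor of `(1 - lr)`, so `K` steps leave a residual of `(1 - lr)**K`; this is the trade-off between unroll length and approximation quality that the unroll-step counts in Appendix B control.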
### **Typos and errors**
> **Q4.** There are some spelling errors in this paper that need further correction
**A.** Thanks for pointing out typos. We will update these in the camera ready version of our paper.
We again express our gratitude for your constructive comments, which are very helpful to our paper. If you have any further comments that could make our paper stronger, we are more than happy to discuss them in the remaining review period. | Summary: This paper proposes a novel framework that could achieve scalable meta-learning algorithms from the perspectives of algorithms and systems. For algorithms, some approximations are proposed for the base Jacobian inverse and adaptive optimizers; for systems, it implements distributed algorithms to ensure different tasks can be done in parallel. Through extensive scalable meta-learning experiments, the proposed approach shows advantages over other baselines.
Strengths: 1. The motivation is clear and strong. Scalable meta-learning is critical, and this paper might be a good solution.
2. The presentation is generally clear and good, although some technical points remain unclear.
3. Different experiments regarding scalable meta-learning are conducted, which could be helpful to get some insights for the common meta-learning/few-shot learning researchers.
Weaknesses: 1. Although the current experiments are very helpful in exploring scalable experiments under meta-learning, some important ablation studies are missing.
- 1) Three issues (section 3) are solved to ensure the scalability of meta-learning. Which issue is the major one slowing down the process? How do these three issues affect scalability? For example, for DDP, is it possible to conduct some experiments of DDP on top of MAML? Most researchers are familiar with MAML, and if DDP experiments on MAML showed improved efficiency on common few-shot learning experiments, that would make the benefits very easy to understand. Similarly for the other two components or issues.
- 2) Is it possible to conduct more experiments comparing the current approach with others, such as iMAML (see Figure 2 in the iMAML paper)? It is hard to compare the proposed approach with others based on the current experiments. It would be better to compare them based on the published experiments.
- MAML: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, ICML 2017
- iMAML: Meta-Learning with Implicit Gradients, NeurIPS 2019
2. It's straightforward for section 3.3, but it is a little bit difficult to follow for sections 3.1 and 3.2.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. ablation study experiments (see weakness)
2. how to understand u in section 3.1?
3. data reweighting method in 4.1: what is the relationship between w and $\lambda_r$, c and $\lambda_c$? Are w and c networks? Also, it is helpful to cite similar papers here:
- Learning to Reweight Examples for Robust Deep Learning, ICML 2018
- A Nested Bi-level Optimization Framework for Robust Few Shot Learning, AAAI 2022
4. Some typos:
- 1) Eq.5: the negative sign "-" is missing.
- 2) line 259: the second $L_{ft}$ should be ${L}_{pt}$.
- Also, it might be easy to follow if $\lambda$ is included in the first equation below line 69.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable review. We try our best to address your concerns and questions here and in the **global response**.
### **Ablation study**
> **Q.** Three issues (section 3) are solved to ensure the scalability of meta-learning. Which issue is the major one to slow down the process? How do these three issues affect scalability?
**A.** Although we didn’t explicitly frame it as an “ablation study”, the effectiveness of each component in SAMA can be understood from Table 1 & 2 in our paper. Below, we directly discuss each component of SAMA based on Table 1 & 2. We also provide a unified table as an ablation study, in the global response above.
**Base Jacobian inverse**
*tl;dr* Identity approximation of Base Jacobian significantly improves memory/compute efficiency (Table 2).
Our baseline algorithms in Table 2 and Figure 1, Neumann and CG, are both state-of-the-art implicit differentiation meta learning algorithms that attempt to approximate the base Jacobian inverse as accurately as possible with multiple Hessian-vector products, instead of approximating it with the identity as SAMA does. As shown in Table 2, due to the expensive second-order gradient computations involved in Hessian-vector products, these methods demonstrate much poorer throughput and GPU memory usage than SAMA, both of which are major bottlenecks in efficiently scaling meta learning. In our ablation study in the global response, we additionally demonstrate that this identity approximation has minimal impact on accuracy as well, by comparing SAMA against Neumann and CG in terms of accuracy.
**Algorithmic adaptation for adaptive optimizer**
*tl;dr* Algorithmic adaptation significantly improves accuracy (Table 1) at the minimal memory/compute cost (Table 2).
The main goal of this work is to devise a *(1) memory/compute efficient* meta learning algorithm that *(2) achieves good performance/accuracy*. In Table 1, SAMA consistently achieves better accuracy than SAMA-NA, which lacks algorithmic adaptation. Moreover, Table 2 shows that SAMA achieves a comparable memory/compute efficiency as SAMA-NA.
**Distributed training**
*tl;dr* Both GPU memory usage and throughput improve consistently as computations are distributed across more GPUs (Table 2).
We agree that it may not be straightforward for readers to clearly understand the benefit of each component when the results are spread across two separate tables. Hence, in the global comment above, we provide a unified table for Wrench experiments, including a comparison with state-of-the-art meta-learning baselines.
> **Q.** For DDP, is it possible to conduct some experiments of DDP on top of MAML?
**A.** We first want to kindly remind the reviewer that we included (MAML-like) few-shot learning experiments in Appendix D (line 209). While we didn’t study the exact problem you mention, we investigated another interesting question of “can we improve the few-shot learning performance by increasing the model size given SAMA’s improved scalability?”, to which our preliminary answer is “yes”. This question is quite different from a majority of previous works where the model size is fixed and authors attempt to improve few-shot accuracy by designing new algorithms.
As for the applicability of DDP to MAML, we will separately discuss two different aspects of MAML: 1) application and 2) algorithm. More specifically, MAML solves the *few-shot learning* application of meta-learning with the *iterative differentiation* algorithm.
- Application: We expect DDP will have a limited impact on few-shot learning applications. DDP improves compute/memory efficiency by distributing samples in a mini-batch across multiple GPUs. However, by the nature of few-shot learning, this application usually has a very small batch size. Thus, distributing such a small batch across multiple GPUs would likely lead to only a modest improvement in compute/memory efficiency. However, meta learning has a lot of other applications besides few-shot learning, such as data optimization and hyperparameter optimization, which don’t necessarily have a small batch size. We expect our DDP scheme will have a meaningful impact on these applications.
- Algorithm: We believe iterative differentiation in MAML would show a reduced compatibility with DDP compared to SAMA. As stated in Sec 3.1, the first-order nature (no Hessian computation) is crucial in achieving the improved DDP compatibility, but the iterative differentiation algorithm usually involves Hessian computation.
### **Clarity**
> **Q.** It's straightforward for section 3.3, but it is a little bit difficult to follow for sections 3.1 and 3.2.
**A.** While we are unable to edit the manuscript during the review period this year, we will improve our writing and the overall clarity in the camera ready version of our paper.
> **Q.** How to understand $u$ in section 3.1?
**A.** As mentioned in Eq. (2) (line 96), $u$ is the update function of gradient-based optimization. Here, we provide concrete examples of $u$ for several popular optimizers.
- SGD: $u_t = g_t = g(\theta_t;\lambda_t)$
- SGD-M: $u_t = \beta m_{t-1} + g_t$
- Adam: $u_t = \frac{\beta_1 m_{t-1} + (1 - \beta_1) g_t}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2) g_t^2}}$
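For illustration only (the function names are our own, and Adam's bias correction and epsilon term are omitted to match the simplified expressions above), these update directions can be written as:

```python
def sgd_update(g):
    # SGD: the update direction is the gradient itself.
    return g

def sgd_momentum_update(g, m_prev, beta=0.9):
    # SGD with momentum: exponentially averaged past gradients.
    return beta * m_prev + g

def adam_update(g, m_prev, v_prev, beta1=0.9, beta2=0.999):
    # Simplified Adam, as in the formula above: first moment over root
    # second moment (assumes v > 0; real Adam adds a small eps and
    # bias-corrects m and v).
    m = beta1 * m_prev + (1 - beta1) * g
    v = beta2 * v_prev + (1 - beta2) * g * g
    return m / v ** 0.5
```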
> **Q.** Data reweighting method in 4.1: what is the relationship between $w$ and $\lambda_r$, $c$ and $\lambda_c$? Are w and c networks? Also, it is helpful to cite similar papers.
**A.** $w$ and $c$ are the reweighting and label-correction functions (i.e. neural networks) parameterized by $\lambda_r$ and $\lambda_c$, respectively. We cited a few relevant data reweighting and label-correction works in line 219, and will also add the papers you suggested.
> **Q.** Typos
**A.** Thanks for pointing them out. We will fix them in the revision.
We hope our response resolved most of your concerns, and helped you evaluate our work more positively. If you have other comments, we are more than happy to address them in the remaining review period.
---
Rebuttal Comment 1.1:
Title: post rebuttal comment
Comment: Thanks to the authors for their efforts on the rebuttal. The response addresses most of my concerns. Please update the final version based on the response, especially the clarifications. Regarding the batch size of few-shot learning: although the size is small, what about the batch size of tasks? Can we distribute tasks in parallel? Maybe this is also one potential solution? I raised my rating by 1.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: We are glad that our rebuttal successfully addressed your concerns, and appreciate the score increase. Following the suggestion, we will incorporate this additional information in our final revision.
**Task parallelism**
Thanks for suggesting a very interesting idea. We believe it could potentially be achieved with some changes in implementation details (e.g. disabling DDP for base-level problems, each of which represents one task, as we don’t want to synchronize gradients across different tasks). Given that most MAML implementations handle each task sequentially, we don’t think such task parallelism will reduce the GPU memory usage, but it could still significantly improve the overall throughput (i.e. training speed). We will further investigate this in our future work. | Summary: The authors explore the issues impacting the scalability of Gradient-based Meta-Learning (GBML), including high memory/compute costs, algorithmic instability, and poor support for distributed training. The causes identified are: the base Jacobian inversion, the absence of algorithmic adaptation for adaptive optimizers, and the requirement for a custom backward pass of meta gradient computation. Proposed solutions include: approximating the base Jacobian with an identity matrix, expanding the meta Jacobian through the chain rule, and creating a communication strategy leveraging the communication-computation overlap trick.
Strengths: 1. The application of meta-learning (bi-level optimization) is widespread in various aspects of deep learning, and considering the acceleration of meta-learning is an important direction.
2. Validating the effectiveness of meta-learning (bi-level optimization) on large-scale datasets and models is important, which can verify the ability of meta-learning to solve practical problems.
3. The manuscript is written quite clearly and is easy to read.
Weaknesses: 1. In the methodology section, the explanation of why an identity matrix can be used to approximate the base Jacobian is unclear and lacks necessary analysis. As for approximating the second-order derivative with the first-order derivative, the method is almost the same as DARTS. Distributed training naturally fits in a scenario where only the first-order derivative is used, and the authors did not provide any special design for it.
2. In the experiment section, the authors should focus on comparing the difference in training efficiency between the meta-learning acceleration framework proposed in this paper and other approximation methods. However, this is only reflected in Table 2, and there is no comparison with the approximation method proposed by DARTS, which is the method most closely related to this paper.
3. It would be beneficial for this work to provide some analysis demonstrating the distance of each approximate solution from the optimal solution, as this could provide more valuable guidance.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Can you further explain why an identity matrix can be used to approximate the base Jacobian?
2. What is the innovative point of the distributed training proposed in this paper? What are the main benefits brought about by this innovation?
3. Can you provide some analysis demonstrating the distance of each approximate solution from the optimal solution?
Please see Weaknesses for details.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable feedback that will improve the quality of our work. We attempt to clarify and address your concerns regarding our work here and in the **global response**.
### **Comparison with DARTS**
While we recognize the similarity, SAMA is different from DARTS in two major aspects.
1. Due to a lack of algorithmic adaptation in DARTS, SAMA and DARTS differ when an adaptive optimizer is used at the base level. As we pointed out in the paper, recent large models are by default trained with adaptive optimizers, and we empirically demonstrate in Table 1 that lacking this adaptation often leads to noticeable performance degradations. When SGD is used as a base optimizer, SAMA and DARTS become more similar, but they still have another important difference which we discuss in the next paragraph.
2. As stated in lines 138-139, DARTS computes the meta Jacobian at the initialization $\theta$, while SAMA computes it at the (approximate) convergence $\theta^*$. Therefore, DARTS is closer to iterative differentiation while SAMA is implicit differentiation. This difference has important implications for both memory and compute efficiency in scalable meta learning.
- Memory: DARTS needs to keep the copies of both the initial parameter $\theta$ and the most recent parameter $\theta^*$ while SAMA only requires tracking $\theta^*$ (reference: the official implementation of DARTS, which indeed separately saves the copy of $\theta$). Therefore, DARTS incurs additional memory usage, worsening the memory bottleneck issue in scalable meta learning.
- Compute: Compared to DARTS, SAMA more naturally allows for a larger unroll step, which essentially reduces the frequency of the meta-gradient computation, the most expensive operation in meta-learning. Given that the meta-objective is evaluated at the optimal base solution $\theta^*$, the quality of meta-gradient naturally depends on the quality of $\theta^*$ approximation. However, DARTS computes the meta-gradient at $\theta$, and therefore the approximation error of $\theta^*$ increases as we use larger unroll steps. Indeed, the original DARTS paper only used an unroll step of 1, while we were able to use an unroll step of 10 for our Transformer experiments (Sec 4.1 & 4.2). This significantly improves the overall training efficiency (i.e. throughput) of meta learning.
To more clearly understand these benefits, we provide a quantitative analysis of memory/computation efficiency between SAMA and DARTS in an additional ablation study. The result can be found in Table 1 & 2 in the global response.
### **Justification of identity approximation**
As shown in [17], approximating the base Jacobian as the identity can be understood as preconditioning the meta-gradient. In other words, given the optimization path from $\theta$ to $\theta^*$, the identity base Jacobian trick in SAMA essentially approximates this optimization path with the *reverse* one-step update (i.e. gradient ascent) from $\theta^*$. Similarly, DARTS, which also adopts the identity base Jacobian trick, approximates the optimization path with the one-step update (i.e. gradient descent) from $\theta$.
Following your suggestion, we additionally analyzed the effect of this approximate solution (i.e. the distance of the approximate solution obtained by SAMA from the optimal solution) on the meta-gradient computation and the final optimal meta solution ($\lambda^*$) in the “biased regression” setting, where the closed-form solution can be obtained analytically. In this experiment, we empirically show that the identity approximation still allows for the accurate estimation of meta-gradients and the optimal meta solution $\lambda^*$, even when the true base Jacobian is not an identity matrix. More detailed discussion is provided in the global response.
[17] Fung et al., Jfb: Jacobian-free backpropagation for implicit networks. AAAI, 2022.
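To make this preconditioning view concrete, consider a hypothetical one-dimensional bilevel problem (our own toy example, not from the paper) where everything is available in closed form; the identity approximation rescales the exact hypergradient by the base curvature `a` but preserves its sign:

```python
# Base level:  f(theta, lam) = 0.5 * a * theta**2 - lam * theta,
#   so theta*(lam) = lam / a and the base Jacobian (Hessian) is just a.
# Meta level:  L(theta) = 0.5 * (theta - t)**2.
a, t, lam = 4.0, 1.0, 2.0
theta_star = lam / a

dL_dtheta = theta_star - t   # meta-gradient w.r.t. theta, evaluated at theta*
d2f_dtheta2 = a              # base Jacobian (a scalar here)
d2f_dtheta_dlam = -1.0       # cross derivative

# Exact implicit-function-theorem hypergradient:
#   dL/dlam = -dL_dtheta * (d2f_dtheta2)^{-1} * d2f_dtheta_dlam
exact = -dL_dtheta * (1.0 / d2f_dtheta2) * d2f_dtheta_dlam

# Identity approximation replaces (d2f_dtheta2)^{-1} with 1:
approx = -dL_dtheta * 1.0 * d2f_dtheta_dlam
```

Here `approx` equals `a * exact`: the two gradients agree in sign, so following the identity-approximated direction still decreases the meta objective, consistent with the preconditioning interpretation.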
### **Benefits of SAMA’s distributed training scheme**
To the best of our knowledge, our work is the first to raise the problem of communication efficiency in distributed meta learning, and to propose an initial solution. Given recent advances in hardware, the communication overhead can quickly become a bottleneck for training efficiency in large-scale learning [31]. As stated in Sec 3.3, one meta gradient computation in SAMA involves 3 backward computations. If implemented naively, gradient synchronization can happen after each backward computation, whereas SAMA performs synchronization only once after the last backward computation. This can roughly reduce the communication cost by three times. In addition, most meta learning implementations including DARTS heavily rely on `torch.autograd.grad` instead of `torch.autograd.backward`. Unfortunately, `autograd.grad` doesn’t support communication optimizations such as communication-computation overlap, which is essential in large-scale learning as shown in [31]. In summary, beyond simply applying distributed training to our first-order SAMA algorithm, we perform additional communication optimizations that (1) reduce communication cost by 3 times through sparse gradient synchronization, and (2) hide the remaining communication cost behind computation through the mixed use of `autograd.grad` and `autograd.backward`. In doing so, we were able to maximize the training efficiency of distributed meta-learning.
[31] Li et al., Pytorch distributed: Experiences on accelerating data parallel training. VLDB, 2021.
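The sparse-synchronization pattern described above maps onto PyTorch's `DistributedDataParallel.no_sync()` context manager. The sketch below is an illustrative stand-in, not SAMA's actual implementation: it uses a single-process "gloo" group so it runs anywhere, and the three loss expressions are placeholders for the three backward computations of one meta-gradient step. The first two backward calls skip the gradient all-reduce; only the last one triggers synchronization.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group just so the sketch runs; real training uses many ranks.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29517")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(torch.nn.Linear(4, 1))
x = torch.randn(8, 4)

# Placeholder for the three backward passes of one meta-gradient computation.
with model.no_sync():                  # DDP skips gradient all-reduce in this block
    model(x).pow(2).mean().backward()  # backward 1: gradients only accumulated locally
    model(x).mean().backward()         # backward 2: gradients only accumulated locally
model(x).sum().backward()              # backward 3: single bucketed all-reduce,
                                       # overlapped with the backward computation
```

Because the final call goes through `backward()` (rather than `torch.autograd.grad`), DDP can overlap the one remaining all-reduce with computation, which is the mixed-use trick described above.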
We hope our response resolved most of your concerns, and helped you evaluate our work more positively. If you have other comments, we are more than happy to address them in the reviewer-author discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. This reply provided additional information, and I am willing to raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: We are glad that our rebuttal alleviated your concerns. We will incorporate this additional information into our final revision. Thanks again for your time and effort!
Strengths: This paper focus on great direction and how to scale meta learning is very important, especially when large language models are so popular now.
The proposed method is also easy to follow, especially the authors try to solve the problem from the perspectives of algorithm and system.
Weaknesses: This paper mainly focuses on scaling, and maybe you should provide more experimental analysis about scaling, such as from a small model to a large model, and from 1 GPU to more GPUs. Maybe you can provide more results of your proposed method and the baselines on scaling.
Most experiments concentrate on NLP tasks. However, there are also some large models for other tasks, such as computer vision. Therefore, maybe you should provide more results on other tasks to illustrate the generality of SAMA.
I think the contribution from the systems side is a little weak, and the workflow can be directly implemented with vanilla PyTorch.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In sections 3.1 and 3.2, you provide two solutions to two important problems. Did you try to compare your proposed method with related works that also try to solve these problems?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work, and for the useful feedback. We address the comments and questions raised in your review below and in the **global response**.
### **Additional scalability analysis**
> **Q.** This paper mainly focuses on scaling, and maybe you should provide more experimental analysis about scaling, such as from a small model to a large model, and from 1 GPU to more GPUs. Maybe you can provide more results of your proposed method and the baselines on scaling.
**A.** We agree with the value of these experiments, and want to highlight the three scalability analyses in our paper — Table 2 & Figure 1 (bottom left): memory/throughput analysis on the noisy finetuning task, Figure 1 (bottom right): memory vs model size analysis on the continued pretraining task, and Figure 4 (Appendix D): scale-accuracy analysis on the image few-shot learning task. Given your suggestion, we additionally performed a more extensive ablation study on two datasets from the Wrench benchmark, and presented the results in the global response. From the results, you can clearly see the effects/benefits of each component in SAMA for scalable meta-learning.
### **Other domains than NLP**
> **Q.** Most experiments concentrate on NLP tasks. However, there are also some large models for other tasks, such as computer vision. Therefore, maybe you should provide more results on other tasks to illustrate the generality of SAMA.
**A.** We want to kindly remind the reviewer that we included two computer vision experiments, namely 1) data pruning on ImageNet/CIFAR datasets and 2) (MAML-like) few-shot image classification, respectively in Sec. 4.3 and Appendix D of our paper. With these two experiments, we tried our best to demonstrate the generality of SAMA across different domains. We admit that there are a lot of other interesting applications and domains to explore, though in this paper we aimed to prioritize the best subset of experiments given the 9-page space limit.
### **Distributed training**
> **Q.** I think the contribution from system is a little weak and the workflow can be directly implemented with original PyTorch.
**A.** To the best of our knowledge, our work is the first to raise the problem of efficient distributed meta learning and to provide an initial solution to it. While our distributed training solution may look simple in retrospect, we noticed that most existing works on meta learning are limited to a single-GPU setup. For the few remaining works that combine distributed training and meta learning, we found that existing implementations are either incorrect (e.g. no proper gradient synchronization) or do not provide communication optimization.
In contrast, as stated in Sec 3.3, our DDP strategy (1) reduces communication cost threefold through sparse gradient synchronization, and (2) hides the remaining communication cost behind computation through a simple implementation trick.
If you have any other comments that could make our paper stronger, we are more than happy to discuss them in the remaining review period. Thanks again for your reviewing effort! | Rebuttal 1:
Rebuttal: We first want to express our gratitude to all reviewers for their reviewing efforts. In our global response, we address two issues raised by reviewers: 1) Ablation study and 2) (empirical) justification of the identity base Jacobian approximation.
### **Ablation Study & SOTA comparison**
While the effectiveness of each component in SAMA (i.e. base Jacobian inverse, algorithmic adaptation for the adaptive optimizer, and efficient distributed training) can be collectively understood from Table 1 & 2, several reviewers asked for a more direct ablation study. Therefore, we provide below a unified table for the ablation study. In detail, our experiment settings are:
- Datasets: 1) AGNews and 2) IMDB from the Wrench benchmark (Sec 4.1)
- Baselines: 1) fine-tuning (no meta-learning), 2) iterative differentiation (e.g. MAML), 3) conjugate gradient (e.g. iMAML), 4) Neumann series, 5) DARTS, 6) SAMA-NA (no algorithmic adaptation)
**Table 1. Ablation results on AGNews**
| | Base Jacobian | Algo Adapt | Distributed | Accuracy | Throughput | Memory |
|:----------:|:---------:|:-------:|:-------:|:------:|:-------:|:-----:|
| Finetuning (Baseline) | x | x | x | 85.79 | 169.16 | 7.77 |
| Iterative Differentiation (MAML) | x | x | x | 85.78 | 28.07 | 22.94 |
| Conjugate gradient (iMAML) | x | x | x | 86.78 | 65.14 | 22.03 |
| Neumann series | x | x | x | 86.65 | 67.03 | 19.70 |
| DARTS | o | x | x | 86.36 | 43.69 | 10.81 |
| SAMA-NA | o | x | x | 86.55 | 137.90 | 10.30 |
| SAMA | o | o | x | **89.05** | 134.56 | 11.12 |
| SAMA (2 GPUs) | o | o | o | **88.85** | 226.27 | 8.00 |
| SAMA (4 GPUs) | o | o | o | **89.02** | **298.28** | **6.46** |
**Table 2. Ablation results on IMDB**
| | Base Jacobian | Algo Adapt | Distributed | Accuracy | Throughput | Memory |
|:--------:|:---------:|:------:|:--------:|:------:|:-------:|:-----:|
| Finetuning (Baseline) | x | x | x | 78.16 | 144.39 | 6.60 |
| Iterative Differentiation (MAML) | x | x | x | 80.25 | 24.24 | 22.03 |
| Conjugate gradient (iMAML) | x | x | x | 81.01 | 56.27 | 21.92 |
| Neumann series | x | x | x | 79.92 | 57.85 | 19.75 |
| DARTS | o | x | x | 80.47 | 37.53 | 10.35 |
| SAMA-NA | o | x | x | 81.92 | 117.86 | 9.93 |
| SAMA | o | o | x | **84.31** | 116.94 | 10.84 |
| SAMA (2 GPUs) | o | o | o | **85.18** | 196.48 | 7.84 |
| SAMA (4 GPUs) | o | o | o | **84.19** | **263.74** | **6.39** |
From our extended ablation study, it can be seen that 1) the identity approximation of the base Jacobian significantly improves memory/compute efficiency, 2) algorithmic adaptation improves meta-learning performance at minimal compute/memory cost, and 3) our communication-optimized distributed training further improves compute/memory efficiency.
### **Analysis on Identity Approximation of the Base Jacobian**
As obtaining the closed-form solution of the Hessian is impossible in almost all deep learning problems, we study the soundness of the identity approximation of the base Jacobian in the simpler “biased regression” setting [1], whose bilevel optimization formulation is as follows:
$$
\lambda^* = \arg\min_{\lambda} \Vert X'w^*(\lambda) - y'\Vert^2
$$
$$
w^*(\lambda) = \arg\min_w \Vert Xw - y\Vert^2 + \beta\Vert w - \lambda \Vert^2
$$
Given the above formulation, the closed-form solutions for the base Jacobian, the meta-gradient $g_{\lambda}$, and the optimal meta solution $\lambda^*$ are:
- base Jacobian $= X^TX + \beta I$
- $g_{\lambda} = \beta (X^TX + \beta I)^{-1}(X'^TX'w^* - X'^Ty')$, where $w^* = (X^TX + \beta I)^{-1}(X^Ty + \beta\lambda)$
- $\lambda^* = (A^TA)^{-1}A^Tb$, where $A=\beta X'(X^TX + \beta I)^{-1}$ and $b=y' - X'(X^TX + \beta I)^{-1}X^Ty$
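These closed forms can be sanity-checked numerically. The sketch below (synthetic Gaussian data and all variable names are ours, not from the paper) evaluates the exact meta-gradient and its identity-approximated counterpart at one point:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, beta = 50, 10, 0.1
X, y = rng.normal(size=(n, d)), rng.normal(size=n)     # base (training) data
Xp, yp = rng.normal(size=(n, d)), rng.normal(size=n)   # meta (validation) data
lam = np.zeros(d)                                      # current meta parameter

J = X.T @ X + beta * np.eye(d)                         # base Jacobian (closed form)
w_star = np.linalg.solve(J, X.T @ y + beta * lam)      # closed-form base solution

v = Xp.T @ (Xp @ w_star) - Xp.T @ yp                   # shared factor of both gradients
g_true = beta * np.linalg.solve(J, v)                  # exact meta-gradient
g_approx = beta * v                                    # identity approximation: J ~ I

cos = g_true @ g_approx / (np.linalg.norm(g_true) * np.linalg.norm(g_approx))
```

Because the base Jacobian $X^TX+\beta I$ is positive definite, the cosine similarity is always positive; how close it gets to 1 depends on the conditioning of the base Jacobian.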
We set $\beta=0.1$, perform 100 meta updates, and measure 1) the cosine similarity between the ground-truth $g_{\lambda}$ and the meta gradient $g_{approx}$ obtained with our approximation, and 2) the L2 distance between the current meta parameter $\lambda_t$ and the optimal solution $\lambda^*$ at each time step $t$. For a more thorough analysis, we also compute these two metrics for other meta-gradient algorithms that explicitly approximate the base Jacobian inverse with conjugate gradient and Neumann series. In the table below, we provide the metrics obtained at several time steps. *The visual plots for all time steps are provided in the attached PDF file at the bottom of this response.*
**Table 3.** $cos(g_{\lambda}, g_{approx})$ **result**
| | t=0 | t=10 | t=20 | t=50 | t=100 |
|---------|--------|--------|--------|--------|--------|
| CG | 0.9995 | 0.9994 | 0.9989 | 0.9997 | 0.9319 |
| Neumann | 0.9957 | 0.9952 | 0.9949 | 0.9948 | 0.9363 |
| SAMA | 0.9843 | 0.9818 | 0.9815 | 0.9861 | 0.9321 |
**Table 4.** $\Vert \lambda^* - \lambda_t\Vert_2$ **result**
| | t=0 | t=10 | t=20 | t=50 | t=100 |
|---------|--------|--------|--------|--------|--------|
| CG | 3.6752 | 1.8959 | 0.9507 | 0.1953 | 0.0160 |
| Neumann | 3.6972 | 2.6116 | 1.3468 | 0.2823 | 0.0216 |
| SAMA | 3.6856 | 2.1537 | 0.6952 | 0.1966 | 0.0184 |
From Table 3 & 4 (and plots in the attached PDF), we can clearly see that 1) while slightly less accurate than second-order algorithms like CG, SAMA still achieves a high directional alignment with the ground truth meta-gradient, and 2) SAMA also achieves a stable convergence to the optimal solution at a comparable speed. We hope this result corroborates the soundness of our identity approximation for the base Jacobian.
[1] Grazzi et al., On the Iteration Complexity of Hypergradient Computation. ICML, 2020.
Note: We used a smaller $\beta$ than the original paper (0.1 instead of 1) to amplify the “non-identity-ness” of the base Jacobian.
We will include all of the above results in the camera ready version of our paper. If reviewers have any further questions regarding these additional experiments, we are happy to discuss them during the author-reviewer discussion period. Thanks again!
Pdf: /pdf/892d681a20c58431815405324d7392ca0b26d5aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper addresses the challenges of scalability in meta learning by introducing SAMA, a novel approach that combines advancements in implicit differentiation algorithms and systems. SAMA demonstrates improvements in computational efficiency and memory consumption compared to other baseline algorithms, and it showcases practical applicability in language and vision domains through experiments with large language models and image classification tasks.
Strengths: 1. Originality: The paper introduces a novel approach to address the scalability challenges of meta learning. By combining advancements in implicit differentiation algorithms and systems, it proposes a solution that supports arbitrary optimizers in meta learning while reducing computational burden.
2. Quality: The paper demonstrates a level of quality in terms of the overall idea and experimental evaluation. By evaluating their method on a few benchmarks, including language models and image classification tasks, the authors provide a good assessment of its performance.
3. Clarity: The key contributions, such as the introduction of SAMA and its evaluation of various benchmarks, are presented in a concise manner. The background and related work sections also help in properly positioning the paper in the literature.
4. Significance: The paper addresses the long-standing challenge of scalability in meta learning. It is able to tackle a variety of high-dimensional inductive biases of large-scale learning - for instance in optimizing large language models as a very relevant open problem to the community.
Weaknesses: In general, the main weaknesses that I noticed are in the design of the experimental section:
1. Comparison with State-of-the-Art: The paper could benefit from a more comprehensive comparison with state-of-the-art meta learning approaches. Is it possible to compare it to well-established meta-learning algorithms, such as MAML?
2. Lack of ablations: More importantly, the paper seems to lack an ablation study. Including such comparisons would provide a more thorough understanding of the model's effectiveness and highlight more clearly its advantages over existing methods. It would be appreciated if the authors identify separate components of their method and ablate them w.r.t the full method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would appreciate it if the authors address the following questions:
1. How does the inconsistency between the assumed vanilla SGD optimizer and the actual optimizer used in large models affect the computation of the meta gradient? Could you provide more insights into the inconsistencies and their consequences in terms of training instabilities and reduced performance in meta learning?
2. The paper mentions that solutions proposed in data-centric AI works to improve training data quality often rely on hand-designed heuristics. Could you elaborate on the specific limitations or drawbacks of existing approaches based on hand-designed heuristics? How does the proposed meta learning approach address or overcome these limitations?
3. In section 4.1, are there any specific techniques or algorithms employed within the meta learning framework to optimize the noisy training data? How does the proposed approach leverage meta learning to adaptively update and improve the quality of the labels generated by weak labeling functions?
4. What are the key differences between vanilla SGD and adaptive optimizers like Adam in terms of their impact on the fixed point condition (mentioned in section 3.2 line 149)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have provided a sufficient discussion of the limitations of their work in section 6. It would be beneficial for the authors to further elaborate on these aspects, beyond language models, to ensure a comprehensive analysis of any potential societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive review and valuable comments. We strive to address concerns and questions that you raised below and in the **global response**.
### **Ablation Study & SOTA comparison**
Though we didn’t explicitly frame it as an “ablation study”, the effectiveness of each component in SAMA can be understood from Table 1 & 2 in our paper. Below, we directly discuss each component of SAMA based on Table 1 & 2. We also provide a unified ablation study result table in the global response.
**Base Jacobian inverse**
*tl;dr* Identity approximation of base Jacobian significantly improves memory/compute efficiency (Table 2).
Our baseline algorithms in Table 2 (and Figure 1), Neumann and CG, are both state-of-the-art implicit differentiation meta-learning algorithms that attempt to approximate the base Jacobian inverse as accurately as possible with multiple Hessian-vector product operations, instead of approximating it with an identity matrix as in SAMA. As shown in Table 2, due to the expensive second-order gradient computation involved in Hessian-vector products, these methods exhibit much lower throughput and higher GPU memory usage than SAMA, both of which are major bottlenecks in scalable meta learning. In our global response, we additionally demonstrate that this identity approximation has minimal impact on accuracy by comparing SAMA against Neumann and CG.
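For reference, here is a minimal sketch (ours, not the paper's implementation) of the truncated Neumann-series approximation of a Jacobian-inverse-vector product; each `hvp` call is one Hessian-vector product, which is exactly the second-order cost that SAMA's identity approximation avoids:

```python
import numpy as np

def neumann_inverse_vp(hvp, v, alpha=0.1, k=100):
    """Approximate J^{-1} v with a truncated Neumann series:
    J^{-1} = alpha * sum_{i>=0} (I - alpha*J)^i, valid when ||I - alpha*J|| < 1.
    `hvp` computes the product J @ x (in deep learning, via autodiff)."""
    out, term = v.copy(), v.copy()
    for _ in range(k):
        term = term - alpha * hvp(term)  # apply (I - alpha*J) once more
        out = out + term                 # accumulate the next series term
    return alpha * out
```

The step size `alpha` and truncation length `k` are tuning knobs; a larger `k` buys accuracy at the price of more Hessian-vector products per meta step.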
**Algorithmic adaptation for adaptive optimizer**
*tl;dr* Algorithmic adaptation significantly improves accuracy (Table 1) at the minimal memory/compute cost (Table 2).
The main goal of this work is to devise a *(1) memory/compute efficient* meta learning algorithm that *(2) achieves good performance/accuracy*. In Table 1, SAMA consistently achieves better accuracy than SAMA-NA, which lacks algorithmic adaptation. Moreover, Table 2 shows that SAMA achieves a comparable memory/compute efficiency as SAMA-NA.
**Distributed training**
*tl;dr* Both GPU memory usage and throughput improve consistently as computations are distributed across more GPUs (Table 2).
We agree that it may not be straightforward for readers to clearly understand the effectiveness of each component when the results are spread across two separate tables. Hence, in the global response, we provide a unified table for Wrench experiments, including a comparison with state-of-the-art meta-learning baselines.
### **Questions**
> Q. How does the inconsistency between the assumed vanilla SGD optimizer and the actual optimizer used in large models affect the computation of the meta gradient? Could you provide more insights into the inconsistencies?
Roughly speaking, once the base Jacobian is approximated as the identity, the meta-gradient formulation becomes $g_{meta}=-\frac{\partial u}{\partial \lambda}\cdot\frac{\partial L_{meta}}{\partial \theta^*}=-\frac{\partial}{\partial \lambda}(u\cdot\frac{\partial L_{meta}}{\partial \theta^*})$. Thus, meta-gradient descent essentially maximizes the inner product between the base update vector $u$ and the meta gradient w.r.t. the base parameters $\theta$. This way, performing base updates with $u$ not only decreases the base loss, but also *maximally* decreases the meta loss. We note that the update direction $u$ of the base problem depends on its optimizer. Thus, we believe that reflecting this base-optimizer choice ultimately leads to improved meta-learning performance by better aligning $u$ and $\frac{\partial L_{meta}}{\partial \theta^*}$.
> Q. The paper mentions that solutions proposed in data-centric AI works to improve training data quality often rely on hand-designed heuristics.
Could you elaborate on the specific limitations of existing approaches based on hand-designed heuristics?
The benefits of meta-learning approaches to data-centric AI can be most clearly seen in our dataset pruning experiments (Sec 4.3). Specifically, heuristics-based methods (e.g. EL2N, forgetting) rank each training sample using some heuristics, such as forgetting counts and gradient norm at initialization, *in the hope* that these heuristics would indeed capture the importance weight of each sample. On the other hand, our meta-learning approach directly optimizes the importance weight of each sample in a way that the resulting model trained with these learned importance weights minimizes the original training loss. Indeed, our experiment results in Fig. 3 clearly show that our meta-learning approach outperforms all heuristics-based methods on both small-/large-scale datasets.
> Q. In section 4.1, are there any specific techniques or algorithms employed within the meta learning framework to optimize noisy training data?
There are abundant meta-learning works for tackling various data issues (e.g. noisy labels, class imbalance). A few examples are Meta-Weight-Net [50], Learning-to-Reweight [48], and Meta-Label-Correction [60]. We adopt some of the architecture designs from these works, but replace the meta-gradient computation algorithm with SAMA, instead of iterative differentiation or truncated backpropagation, to improve the scalability.
> Q. What are the key differences between vanilla SGD and adaptive optimizers like Adam in terms of their impact on the fixed point condition?
Most gradient-based optimizers, including SGD and Adam, share the same fixed-point condition $\frac{\partial L_{base}}{\partial \theta^*} = 0$. However, most gradient-based meta learning in practice approximates $\theta^*$ with only a few gradient updates, so this “zero gradient” condition is unlikely to be met. Therefore, we hypothesize that in practice we need to pay more attention to the alignment between $u$ and $\frac{\partial L_{meta}}{\partial \theta}$ discussed above, where $u$ depends on the base optimizer.
We hope our response resolved most of your concerns, and helped you evaluate our work more positively. If you have other comments, we are happy to address them in the reviewer-author discussion period.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The concerns are mostly addressed, and I will keep my initial score.
---
Reply to Comment 1.1.1:
Comment: We are glad that most of your concerns are addressed. Thanks for your reviewing effort again. | null | null | null | null | null | null |
Provable Training for Graph Contrastive Learning | Accept (spotlight) | Summary: The goal of this paper is to investigate the properties of different nodes in GCL with different graph augmentations. The paper discovers the imbalanced training issue of GCL methods, and proposes the concept “node compactness”, measuring how each node follows the GCL principle. Finally, the paper proposes the node compactness regularization, and shows its effectiveness by combining it with existing GCL methods.
Strengths: - The paper is well motivated with both theoretical and empirical analysis.
- The paper provides some interesting insights on the node properties in GCL.
- The effectiveness of the proposed model is well demonstrated by the empirical results.
Weaknesses: - One of the motivations is that “how to distinguish these nodes”, while it seems that the paper doesn’t distinguish these nodes?
- Although the presentation is good enough, I’m not very clear with the whole training process because of the math-heavy part in the technique part. Do we need to compute the POT first, and then train InfoNCE loss? Or do we need to train the model with the two steps iteratively?
- In Theorem 1, the conclusion is obtained under the condition that the GCN layer has the form AXW+b; if the GCN process is not AXW+b, can we still use the POT regularization?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The discussion section of the paper is concise but unsatisfactory in addressing the main points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the positive comments and valuable feedback from the reviewer. Below, we address the reviewer's concerns one by one, hoping that a better understanding of every point can be delivered.
1. > One of the motivations is that “how to distinguish these nodes”, while it seems that the paper doesn’t distinguish these nodes?
**Response**: Thanks. The value of node compactness serves as a good metric to distinguish all the nodes in a graph by how prone they are to being well-trained. It is a relative measure rather than an explicit discrimination bound. In Figure 3, nodes with different properties are distinguished by their node compactness, directly addressing this motivation.
2. > Although the presentation is good enough, I’m not very clear with the whole training process because of the math-heavy part in the technique part. Do we need to compute the POT first, and then train InfoNCE loss? Or do we need to train the model with the two steps iteratively?
**Response**: Sorry for the confusion. We describe the whole process of graph contrastive learning with provable training as follows:
(1) Generate two augmented graphs $G_1$ and $G_2$ from $G$;
(2) Perform the forward pass to obtain the node embeddings $Z_1$ and $Z_2$, and compute the InfoNCE loss $\mathcal L_{\text{InfoNCE}}(Z_1, Z_2)$;
(3) Compute the POT loss as described in Algorithm 1 of our paper;
(4) Combine the two losses as in Equation 5, then backpropagate to update the network parameters;
(5) Repeat steps (1)-(4) until convergence.
In these steps, steps (1)-(2) are the same as in traditional GCL. We will improve the presentation of Algorithm 1 accordingly in the revision.
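To make step (2) concrete, here is a minimal NumPy sketch of a cross-view InfoNCE loss (our simplified version; GRACE-style objectives also include intra-view negatives, which we omit):

```python
import numpy as np

def info_nce(Z1, Z2, tau=0.5):
    # Row-normalize so dot products become cosine similarities.
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = np.exp(Z1 @ Z2.T / tau)  # N x N cross-view similarity matrix
    # The positive pair for node i is (Z1[i], Z2[i]); the other N-1 nodes are negatives.
    return float(np.mean(-np.log(np.diag(sim) / sim.sum(axis=1))))
```

The loss is small when each node's two augmented views agree more with each other than with the other nodes' views.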
3. > In theorem 1, the conclusion is obtained under the condition that AXW+b, so if the GCN process is not AXW+b, can we still use POT regularization?
**Response**: Yes, POT is compatible with various graph encoders. Since GCN is the most common encoder in GCL methods, we mainly provided the derivation of node compactness for GCN. For other encoder types, the form of node compactness can be derived similarly using Definition 5 and Theorem 1. For example, consider a Graph Isomorphism Network (GIN) encoder, whose layer takes the form
$$
h_v^{(k+1)}=\text{MLP}((1+\epsilon)h_v^{(k)}+\sum_{u\in \mathcal N(v)}h_u^{(k)}).
$$
We can first relax the nonlinear activation functions in the MLP with Definition 5, then obtain the pre-activation bounds with Theorem 1, and finally follow the steps in Appendix A.2 to derive the form of node compactness when a GIN encoder is applied.
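For illustration, here is a minimal NumPy sketch of one such GIN layer (the two-layer ReLU MLP and the weight shapes are our assumptions); the ReLU is the kind of nonlinearity that Definition 5 would relax:

```python
import numpy as np

def gin_layer(A, H, W1, b1, W2, b2, eps=0.0):
    # Aggregate: (1 + eps) * h_v plus the sum of neighbor embeddings
    # (A is the adjacency matrix, H the node embedding matrix).
    agg = (1.0 + eps) * H + A @ H
    # Two-layer MLP with ReLU; the ReLU is what bound propagation relaxes.
    return np.maximum(agg @ W1 + b1, 0.0) @ W2 + b2
```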
We will add this discussion in the revision to improve the "Limitation" section.
4. > The discussion section of the paper is concise but unsatisfactory in addressing the main points.
**Response**: We have expanded the discussion of limitations. Please refer to the "global" rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and clarification. I appreciate the authors' efforts, which have addressed my concerns and corrected my misunderstanding. I'm willing to increase my rating.
Strengths: S1. The paper topic is interesting.
S2. The theoretical foundation is solid.
S3. The paper is relatively well-written and well-organized.
Weaknesses: The experiments section could be strengthened; I'm not sure whether the following experiment needs to be conducted. Specifically, the paper aims to discover the nodes that are not sensitive to different graph augmentations, yet there is no experiment analyzing this; for example, one could generate different graph augmentations to check whether some nodes are always well trained, or never well trained. Moreover, some notations are not described, e.g., [ ]+ and [ ]- in Theorem 1. Also, why do the curves of traditional GCL methods decrease as the epoch increases in Fig. 2?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the precious time spent reading through the paper and giving constructive suggestions. To address the concerns, we clarify the experiment section and some notations as follows.
1. > The experiments section could be strengthened. I’m not sure whether the following experiment needs to be conducted. Specifically, the paper aims to discover the nodes that are not sensitive to different graph augmentations. However, there is no experiment to analyze this, i.e., maybe we can generate different graph augmentations to check whether some nodes are always well trained, or not well trained?
**Response**: We greatly thank the reviewer for this suggestion. To investigate whether our proposed node compactness can reflect a node's sensitivity to augmentations, we conduct an experiment on the relationship between a node's compactness and the standard error of its InfoNCE loss under different graph augmentations. Specifically, we first train a GCL model, then fix the network parameters and sample 500 pairs of different augmentations. After encoding these augmentations, we obtain 500 InfoNCE loss values for each node and compute their standard error to reflect the node's sensitivity to different augmentations. We also compute each node's compactness as in Theorem 2 of the paper, which is not tied to any specific augmentation. With the process above, we obtain a data sample (node_compactness, InfoNCE_std) for each node. We divide these samples into several bins by the value of node compactness and calculate the average of InfoNCE_std in each bin. Finally, we show the result as a scatter plot in Figure 2 of the rebuttal PDF. It can be seen that less sensitive nodes have higher node compactness values. To conclude, this experiment shows that our proposed node compactness can serve as a proxy for a node's sensitivity to different graph augmentations, as stated in the motivation. We hope this experiment makes the paper more self-contained.
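The binning procedure described above can be sketched as follows (the helper name and the use of equal-size bins are our own assumptions; the rebuttal may bin differently):

```python
import numpy as np

def sensitivity_profile(compactness, per_node_losses, num_bins=5):
    """per_node_losses: (num_augmentations, num_nodes) InfoNCE values under
    sampled augmentation pairs. Returns (bin centers, mean loss spread per bin)."""
    spread = per_node_losses.std(axis=0)        # per-node sensitivity proxy
    order = np.argsort(compactness)
    bins = np.array_split(order, num_bins)      # equal-size bins by compactness
    centers = np.array([compactness[b].mean() for b in bins])
    avg_spread = np.array([spread[b].mean() for b in bins])
    return centers, avg_spread
```

If compactness tracks sensitivity as claimed, `avg_spread` should decrease as the bin centers increase.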
2. > Moreover, some notations are not described, e.g., [ ]+ and [ ]- in Theorem 1.
**Response**: Sorry for the confusion in the notations. Here $[X]\_+$ denotes $\max(X,0)$ and $[X]\_-$ denotes $\min(X,0)$. We will check the description of the notations in the revision thoroughly.
3. > Also, why the curve of traditional GCL methods decreases with the increase of Epoch in Fig.2?
**Response**: Thanks for pointing this out. We think the "false negative" issue may contribute to the drop of the curves in Figure 2. Since GCL uses all $N-1$ other nodes as negative samples, there are many false negatives, i.e., some nodes with the same class as the anchor node are treated as negatives in the contrastive loss. By definition, node compactness measures the worst case of how well a node behaves across all possible augmentations. Therefore, as the epoch increases and some false-negative information is captured, that worst-case value may decrease. As a result, we observe a drop in node compactness for traditional GCL methods.
There are some existing methods alleviating this, called "hard negative mining", including the baseline method ProGCL. From Figure 2, we see that the curve of ProGCL almost does not drop, while the average node compactness of other methods decreases. Since our POT method explicitly optimizes the node compactness, the curve will continue to rise as the epoch increases.
It is an open question and may be related to fundamental issues in contrastive learning. We look forward to further discussions.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your response. I have no further questions. | Summary: Graph augmentation is a fundamental component for graph contrastive learning. When augmenting graph structures, how the change of structures affects the GCL is an interesting problem. In this work, the paper proposes the “node compactness” to describe the behavior of different nodes, i.e., whether there are some nodes consistently stable enough to the training process with different graph augmentations. With this concept, the paper designs a new POT regularization term as a plug-in, which enables the training process of nodes to follow the GCL principle, so as to improve the performance of GCL finally.
Strengths: 1. The paper studies an important and challenging problem. The idea of node compactness is novel.
2. Using bound propagation to derive node compactness in GCL is also interesting and technically sound.
3. Thorough experiments are conducted to validate how the proposed model works.
Weaknesses: 1. Some techniques are not introduced clearly (see comments below).
2. Some experimental results are also not clearly explained (see comments below).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The first constraint in Definition 3 requires the graph augmentation is edge dropping. Does that mean the proposed provable training is only suitable for edge dropping?
2. The experiment analyzes the relationship between the node compactness with the properties of different augmentation strategies, while some statements are not well explained, which may be hard to understand. Specifically, why does a higher degree result in larger node compactness? Why the results in Fig.3 are reasonable? More details should be provided.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly thank the reviewer for your interest in our paper and constructive suggestions. To make further clarification on the techniques and experimental details, we respond to the reviewer's questions one by one. We look forward to assisting to have a better understanding of our paper.
1. > The first constraint in Definition 3 requires the graph augmentation is edge dropping. Does that mean the proposed provable training is only suitable for edge dropping?
**Response**: Thanks. We present the definitions and theorems under the condition of edge dropping since it is the most common topology augmentation in GCL [1, 2, 3]. The proposed POT is suitable for various augmentation strategies, including edge dropping and edge adding. POT is applicable as long as the range of augmentation is well-defined as in Definition 4. For example, if edge addition is allowed, the first constraint in Definition 3 can be removed, which changes the element-wise bound in Definition 4. Once the element-wise bound is defined, the remaining computation steps are the same, and the POT loss with edge addition can be derived. Since edge addition is not a common practice and all the baselines use edge dropping only, we keep the first constraint in Definition 3 to obtain a tighter bound. This discussion will be added to the revision.
2. > The experiment analyzes the relationship between the node compactness with the properties of different augmentation strategies, while some statements are not well explained, which may be hard to understand. Specifically, why does a higher degree result in larger node compactness? Why the results in Fig.3 are reasonable? More details should be provided.
**Response**: Sorry for the confusion. First, edges are randomly dropped at a uniform rate in GRACE. Since there are fewer unimportant edges around a low-degree node, its informative edges are more easily dropped, which prevents the contrastive loss from learning useful semantics; therefore, low-degree nodes are hard to train well in GRACE. This is also the motivation of GCA. To alleviate this, GCA sets the dropping rate of each edge by node centrality, keeping important edges around low-degree nodes. That is why we observe high node compactness for low-degree nodes. However, the augmentation strategy of GCA assigns a higher dropping rate to high-degree nodes, which may explain why node compactness first drops as the degree increases. When the degree becomes relatively large, the node is "strong" enough for this augmentation strategy and is again prone to be well-trained.
To conclude, node degree and node compactness are positively correlated in GRACE, whereas there is a trade-off between degree and node compactness in GCA. This explains why the result in Figure 3 is reasonable.
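The degree-dependent dropping behavior described above can be illustrated with a small sketch (the edge-scoring function below is illustrative only, not GCA's exact formula): edges around high-degree nodes receive higher drop probabilities, so informative edges around low-degree nodes are kept more often.

```python
import numpy as np

def edge_drop_probs(edges, num_nodes, p_min=0.1, p_max=0.7):
    """Assign higher drop probabilities to edges around high-degree
    nodes, keeping informative edges around low-degree nodes."""
    deg = np.zeros(num_nodes)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Edge score: mean log-degree of its two endpoints (illustrative).
    s = np.array([(np.log1p(deg[u]) + np.log1p(deg[v])) / 2 for u, v in edges])
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]
    return p_min + (p_max - p_min) * s

edges = [(0, 1), (0, 2), (0, 3), (3, 4)]  # node 0 is a hub, node 4 is low-degree
probs = edge_drop_probs(edges, num_nodes=5)
# Edges touching the hub (node 0) get higher drop probabilities than (3, 4).
```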
References:
[1] Zhu, Yanqiao, et al. "Deep graph contrastive representation learning." arXiv preprint arXiv:2006.04131 (2020).
[2] Zhang, Hengrui, et al. "From canonical correlation analysis to self-supervised graph neural networks." Advances in Neural Information Processing Systems 34 (2021): 76-89.
[3] Thakoor, Shantanu, et al. "Bootstrapped representation learning on graphs." ICLR 2021 Workshop on Geometrical and Topological Representation Learning. 2021. | Summary: The paper aims at studying the node properties given different graph augmentations in graph contrastive learning. It has the following contributions. 1) It discovers the training of GCL methods is severely imbalanced. 2) It proposes a novel concept of “node compactness”, and the provable training for GCL with the concept. 3) Besides the theoretical analysis, the paper uses extensive numerical results to show the effectiveness of the proposed method. Generally, the paper is interesting and easy to follow.
Strengths: - The paper addresses an important problem in the GCL community and is engaging to read.
- The technical contribution of the paper is novel and brings a fresh perspective to the field.
- The proposed model is thoroughly evaluated on various benchmarks, demonstrating its effectiveness and providing strong evidence for its performance.
Weaknesses: - The experiment section may lack some essential components or details, and therefore it could be considered insufficient.
- The discussion on the limitations of the proposed method is relatively brief. It would be beneficial to expand on this section and provide a more comprehensive analysis of the limitations and potential challenges associated with the proposed approach.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - In Section 3, the authors observe the training imbalance in GCL and propose to address it using POT. Although they present node compactness results, they do not show the InfoNCE loss in the experiments, unlike in Figure 3. It would be helpful to include the InfoNCE loss in the experiment to provide a comprehensive evaluation. Additionally, it would be interesting to investigate if POT can also improve the InfoNCE loss.
- The paper discusses various graph augmentations, and it would be valuable for the authors to provide a more specific discussion on how the proposed POT method can enhance existing GCL methods with different graph augmentations. Exploring the applicability of POT to different augmentation strategies would contribute to a more thorough understanding of its potential benefits.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments and valuable feedback on our paper. To further address your concerns, we provide additional experiments as well as a more detailed discussion of the limitations.
1. > In Section 3, the authors observe the training imbalance in GCL and propose to address it using POT. Although they present node compactness results, they do not show the InfoNCE loss in the experiments, unlike in Figure 3. It would be helpful to include the InfoNCE loss in the experiment to provide a comprehensive evaluation. Additionally, it would be interesting to investigate if POT can also improve the InfoNCE loss.
**Response**: Thanks for pointing this out. In Section 3, we investigate the problem with existing GCL training using InfoNCE, since it is the only metric available at that point. After that, we propose "node compactness" as a more appropriate and intrinsic metric to evaluate the training of a node in GCL. Therefore, we conduct experiments in the figures to show the validity of node compactness as well as some of its properties. That is why we did not provide further analysis of InfoNCE.
However, it is still worthwhile to investigate whether POT can improve InfoNCE. Intuitively, it can: POT places a larger regularization on nodes that are not well-trained across all possible augmentations, so the training of nodes under two specific augmentations is also improved. To illustrate this, we conduct an experiment comparing the InfoNCE loss values of nodes with and without POT, similar to the experiment in Section 3. We choose GRACE and GCA as baselines and show the result on Cora in Figure 1 of the rebuttal PDF. InfoNCE is improved by POT in two aspects: the average InfoNCE of the nodes is reduced, and the distribution of InfoNCE values becomes more centralized and balanced. These results and discussions will be added to the Appendix in the revision.
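For reference, the per-node InfoNCE quantity compared in this kind of experiment can be sketched as follows (a simplified numpy version that, unlike GRACE, omits intra-view negative pairs):

```python
import numpy as np

def per_node_infonce(z1, z2, tau=0.5):
    """Per-node InfoNCE between two views z1, z2 of shape (n, d):
    matching rows are positives, the other rows of z2 are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                      # cosine similarities / tau
    log_den = np.log(np.exp(sim).sum(axis=1))  # log-denominator per node
    return log_den - np.diag(sim)              # one loss value per node

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))       # a slightly perturbed view
loss = per_node_infonce(z1, z2)                # shape (8,), all entries > 0
```

A distribution of such per-node losses that becomes lower on average and more concentrated corresponds to the improvement reported above.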
2. > The paper discusses various graph augmentations, and it would be valuable for the authors to provide a more specific discussion on how the proposed POT method can enhance existing GCL methods with different graph augmentations. Exploring the applicability of POT to different augmentation strategies would contribute to a more thorough understanding of its potential benefits.
**Response**: Thank you for the suggestion to improve the experiments. To investigate whether our POT applies to various graph augmentations, we evaluate the performance of POT under four types of topology augmentation: random edge dropping, proposed in GRACE, and node centrality-based topology augmentations proposed in GCA, including degree centrality, eigenvector centrality, and PageRank centrality. The backbone of the GCL baseline is GRACE. The results on Cora and Flickr are given in Table 1 of the rebuttal PDF. POT consistently outperforms the baseline models under different graph augmentations.
As stated in the Limitation section, since more theoretical verification is needed, POT for other augmentations, including feature augmentations, is still work in progress. We look forward to extending the capabilities of POT.
3. > The discussion on the limitations of the proposed method is relatively brief. It would be beneficial to expand on this section and provide a more comprehensive analysis of the limitations and potential challenges associated with the proposed approach.
**Response**: Thanks for your suggestion. We have expanded the limitation section accordingly. Please refer to the "global" rebuttal. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for the acknowledgment of our paper and for many constructive comments. Since some reviewers mentioned that the discussion of limitations may be relatively brief, we expand that part as follows. Due to the limited time, we have tried our best to explore different aspects of possible limitations. A more detailed discussion will be added in the revision.
1. Applying POT to graph-level GCL
If the downstream task requires graph embeddings, POT still applies. First, related concepts like "graph compactness" can be defined similarly to node compactness. Second, pooling operators such as "MEAN" and "SUM" are linear (and "MAX" can be relaxed in the same way as other nonlinearities), so the form of "graph compactness" can be derived with some modifications to the steps in Appendix A.2.
2. The limitation on the network structure of the encoder
Definition 5 and Theorem 2 are based on the assumption that the encoder is a GCN, since GCN is the most common choice in GCL. However, other encoders can also be equipped with POT. GraphSAGE [1] has a similar form to GCN, so the derivation steps are similar. For the Graph Isomorphism Network (GIN) [2], the layer update is
$$
h_v^{(k+1)}=\text{MLP}((1+\epsilon)h_v^{(k)}+\sum_{u\in \mathcal N(v)}h_u^{(k)}).
$$
We can first relax the nonlinear activation functions in MLP with Definition 5, then obtain the pre-activation bounds with Theorem 1, and finally follow the steps in Appendix A.2 to derive the form of node compactness when a GIN encoder is applied.
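As a concrete reference for the update quoted above, here is a minimal numpy sketch of one GIN layer (the two-layer ReLU MLP and the weight shapes are illustrative choices, not fixed by the GIN definition):

```python
import numpy as np

def gin_layer(H, A, eps, W1, b1, W2, b2):
    """One GIN update: h_v <- MLP((1 + eps) * h_v + sum_{u in N(v)} h_u),
    with a two-layer ReLU MLP."""
    agg = (1.0 + eps) * H + A @ H          # (1+eps)*self + neighbor sum
    pre = agg @ W1 + b1                    # pre-activations (the quantities bounded above)
    return np.maximum(pre, 0.0) @ W2 + b2  # ReLU, then linear output layer

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # adjacency of two connected nodes
H = np.eye(2)                             # one-hot node features
out = gin_layer(H, A, eps=0.0,
                W1=np.eye(2), b1=np.zeros(2),
                W2=np.eye(2), b2=np.zeros(2))
# out == [[1, 1], [1, 1]]: each node sums its own and its neighbor's features
```

Relaxing the ReLU in `gin_layer` and bounding `pre` is exactly the step the derivation above refers to.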
3. The provable training of feature augmentation
In this paper, we focus on topology augmentations. However, as mentioned in the limitations, feature augmentation is also a common class of augmentations. We explore provable training for feature augmentation as follows. Unlike topology augmentation, the augmented feature is only the input of the first layer, so it is easier to handle. Inspired by [3], we can relax the discrete manipulation into $l_1$ constraints; the node compactness is then related to a feasible solution of the dual form, and a similar binary cross-entropy loss can be designed. More detailed derivations are still work in progress.
However, more effort is still needed to handle some subtle parts.
We thank the reviewers again for their precious time in reading our paper and rebuttal. We hope the rebuttal phase has been informative and pleasant for all reviewers.
References:
[1] Hamilton, Will, Zhitao Ying, and Jure Leskovec. "Inductive representation learning on large graphs." Advances in neural information processing systems 30 (2017).
[2] Xu, Keyulu, et al. "How powerful are graph neural networks?." arXiv preprint arXiv:1810.00826 (2018).
[3] Zügner, Daniel, and Stephan Günnemann. "Certifiable robustness and robust training for graph convolutional networks." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019.
Pdf: /pdf/49d08bb05bcb3c130e696b6657aed0d978ad097a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Binary Radiance Fields | Accept (poster) | Summary: This paper introduces a new voxel grid radiance field representation in which the feature vectors are restricted to contain binary values. The motivation for this is to greatly reduce the storage requirements of voxel grid radiance fields. They use the straight-through estimator to allow backpropagation through the binary values. The representation combines a 3D hash grid with a triplane hash grid and uses trilinear or bilinear interpolation on the binary feature vectors. They conduct experiments on the Synthetic-NeRF, Synthetic-NSVF, and Tanks and Temples object datasets and compare against many other voxel grid radiance field methods. They show that they are able to achieve competitive rendering quality while using an order of magnitude less storage than the best-performing competitor.
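The straight-through estimator mentioned in the summary can be sketched as follows; this is a minimal numpy illustration of the general BinaryNet-style recipe, and whether BiRF uses the clipped-gradient variant is an assumption:

```python
import numpy as np

def binarize(theta):
    """Forward pass: map real-valued latent parameters to {-1, +1}."""
    return np.where(theta >= 0.0, 1.0, -1.0)

def ste_backward(theta, grad_out, clip=1.0):
    """Backward pass: the straight-through estimator treats sign() as the
    identity, optionally zeroing gradients where |theta| exceeds clip."""
    return grad_out * (np.abs(theta) <= clip)

theta = np.array([-0.7, -0.1, 0.3, 1.5])
b = binarize(theta)                   # array([-1., -1.,  1.,  1.])
g = ste_backward(theta, np.ones(4))   # array([1., 1., 1., 0.])
```

The latent real-valued `theta` is what gets updated during training; only its sign is used in the forward pass.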
Strengths: The idea is straightforward but novel to the best of my knowledge. They are able to achieve high-quality renders with a large reduction in storage requirements. It is surprising to see that a radiance field built only with binary feature vectors could perform so well. They also outperform other methods in which the radiance field is compressed as a post-process. This is probably because they are able to train the quantized model in an end-to-end manner, which is interesting to see.
There is also some innovation in the idea of combining a 3D voxel grid plus a triplane representation as usually only one or the other is used, as far as I know.
Weaknesses: They are missing a reference to Variable Bitrate Neural Fields:
Takikawa, T., Evans, A., Tremblay, J., Müller, T., McGuire, M., Jacobson, A., & Fidler, S. (2022, July). Variable bitrate neural fields. In ACM SIGGRAPH 2022 Conference Proceedings (pp. 1-9).
However they include other compression-based methods in the evaluation, and VBNF did not provide results on the standard benchmark datasets, so I can understand why they didn't include it in the evaluation.
It would have been nice to see videos in the supplemental material, which would be helpful for appreciating the visual quality of the results, especially for dynamic scenes.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is the model actually being stored as binary vectors during training and inference, or are you actually using 8-bit or larger integers (for example due to the way TinyCudaNN is implemented)? I wondered if it is actually feasible to represent binary vectors in this way using PyTorch, or if that is only a hypothetical currently.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The technical limitations appear reasonable; there is no mention of broader social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful comments. Following your comments, we will cite the missing related work in the manuscript (Q1, Q2 in the global response) and supplement per-scene videos for the results in the supplementary material (Q2). Also, we have explained in detail several questions about the implementation of binary data (Q3). The detailed responses to your comments are as follows.
---
**Q1. However they include other compression-based methods in the evaluation, and VBNF did not provide results on the standard benchmark datasets, so I can understand why they didn't include it in the evaluation.**
Thank you for understanding the exclusion of VBNF (referred to as VQAD) [1] as our baseline model in evaluation. As the reviewer mentioned, it does not contain results for standard benchmark datasets that we use. Please refer to the Q2 in the global response for a more detailed comment.
---
**Q2. Addition of video for checking visual quality of the results, especially for dynamic scenes.**
Thank you for suggesting the supplement to the video. We will add per-scene videos for each dataset, including dynamic scenes. Furthermore, we plan to show these videos on the project page when available.
---
**Q3. Is the model actually being stored as binary vectors during training and inference, or are you actually using 8-bit or larger integers (for example due to the way TinyCudaNN is implemented)? I wondered if it is actually feasible to represent binary vectors in this way using PyTorch, or if that is only a hypothetical currently.**
During training and inference, the grid parameters are stored in floating-point data type since we implement our feature grid using tiny-cuda-nn [2] that only supports floating-point data type (16-bit or 32-bit).
Also, we currently cannot implement actual 1-bit binary vectors because most common frameworks, including PyTorch, do not support sub-byte data types. Thus, it is only possible to use 8-bit (1-byte) or larger data types to represent binary parameters. Accordingly, we still need to pack n (≥8) binary parameters into an n-bit data type. Our optimized binary parameters are likewise stored as 8-bit integers due to this limitation.
As we mentioned in Sec.6 in the manuscript, we acknowledge that the un-optimized implementation is a limitation of our work. We expect it to be refined by implementing an optimal binary feature grid.
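As an illustration of the packing described above, n binary parameters can be stored in n/8 bytes offline, e.g. with numpy; this is a sketch of one possible scheme, not BiRF's actual storage format:

```python
import numpy as np

# 16 binary parameters in {-1, +1}, stored one value per element.
binary = np.array([-1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, 1, 1, -1, -1])

bits = (binary > 0).astype(np.uint8)  # map {-1, +1} to {0, 1}
packed = np.packbits(bits)            # 16 parameters -> 2 bytes

# Recover the original {-1, +1} parameters from the packed bytes.
restored = np.unpackbits(packed)[:binary.size].astype(np.int8) * 2 - 1
```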
---
**Reference**
[1] Takikawa et al., Variable bitrate neural fields, SIGGRAPH 2022
[2] Müller, Tiny cuda neural networks, https://github.com/NVlabs/tiny-cuda-nn
---
Rebuttal Comment 1.1:
Comment: I have read over all of the reviews and the authors' responses. Thanks to their authors for their explanations and additional experimental results. I think the additional results and explanations strengthen the paper further, and I am still supportive of acceptance. | Summary: This paper proposes a new representation, binary radiance fields (BiRF), for memory-efficient novel view synthesis tasks. The representation is inspired by the binary neural network. BiRF is built upon Instant-NGP. The critical component of this representation is the binarization of real-valued feature grids, such that the resulting feature grids can store bitwise feature grids, which highly reduces the storage of NeRF models. BiRF also enhances the 3D voxel grid with three 2D plane grids, where the 2D features are incorporated to alleviate the hash collision. The training loss composes of the RGB loss and the sparsity loss. Experiments are conducted on the NeRF-synthetic dataset and NSVF dataset. Compared with state-of-the-art NeRF methods (data structure-based and compression-based methods), BiRF can obtain comparable or even higher reconstruction quality while requiring less storage. The ablation study also shows the effectiveness of introducing the 2D planes and sparsity loss.
Strengths: The paper is well written. The idea is simple but very effective and easy to implement. The insight of combining 3D feature grids and 2D plane grids is really cool. Overall, the method proposed in this paper is valuable to the computer vision community (both industry and academia). For example, memory storage is an issue when reconstructing very large-scale scenes, and there is also demand for deploying NeRF models to mobile devices.
Weaknesses: The paper also mentioned a limitation: it requires a longer time to train BiRF compared to its Instant-NGP counterpart, due to the binarization operation on real-valued feature grids. Moreover, I think the memory requirement during training can be higher than the non-binarized version, since BiRF needs to maintain the temporary real-valued feature grids in addition to the binary feature grids.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The reconstruction quality of BiRF outperforms other methods by a large margin in terms of PSNR, but the reason is not very clear, since BiRF replaced the real-valued feature grids with binary feature grids. In other words, the performance should drop (at least slightly) compared to the non-binary version (Instant-NGP). I think the performance gain comes from the incorporation of 2D plane features. However, the authors did not provide ablations with and without binarization of their network architecture. I would definitely raise my score if the authors provide that.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: A minor typo: l and L are not explained for Eq. (8), though it is obvious they refer to the grid levels (the same issue applies to Eq. (9)).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful comments. Following your comments, we have performed additional experiments on binary feature encoding, covering training time (Q1), memory requirement (Q2), and reconstruction quality (Q3). In these experiments we compare a real-valued (un-binarized) feature grid and a binary feature grid: "real-valued" refers to a feature grid that does not apply binary feature encoding, and "binary" refers to the binary feature grid proposed in this work. The detailed responses to your comments are as follows.
---
**Q1. Longer training time of the proposed BiRF during training.**
We have evaluated the training time for 20K iterations of a real-valued feature grid and a binary feature grid, as shown in the table below. Although the increased training time is a limitation of our model, it is not critical since we use a simple binarization: a sign function. The binarization procedure takes only a small portion of the whole pipeline, so this simple computation does not significantly impact the total training time. Furthermore, in some cases the binary grid converges faster than the real-valued grid, since the sparsity of the optimized scene affects training time, as described in Sec. 5.3 of the manuscript.
**Experiment 1) Evaluation of the training time for 20K iterations of a real-valued feature grid and a binary feature grid.**
||Ours-F1|Ours-F2|Ours-F4|Ours-F8|
|:---|:---:|:---:|:---:|:---:|
|***Synthetic-NeRF***|||||
|Real-valued|5.15 min|6.02 min|8.42 min|13.31 min|
|Binary|5.12 min|6.10 min|8.66 min|13.86 min|
|***Synthetic-NSVF***|||||
|Real-valued|5.31 min|6.23 min|8.77 min|14.08 min|
|Binary|5.17 min|6.22 min|8.93 min|14.53 min|
|***Tanks and Temples***|||||
|Real-valued|5.04 min|5.93 min|8.41 min|13.46 min|
|Binary|5.01 min|6.00 min|8.59 min|14.04 min|
---
**Q2. Higher memory requirement of the proposed BiRF during training.**
The memory requirement during training can be higher than for the non-binarized version, since BiRF needs to maintain the temporary real-valued feature grids in addition to the binary feature grids.
We have evaluated the memory usage of a real-valued feature grid and a binary feature grid. As shown in the table below, there is *no noticeable increase in memory usage* due to binarization.
This is because we binarize only the grid parameters corresponding to a ray, *not the whole grid*. Thus, the additional memory usage from binarization is not significant enough to be a concern.
Also, the number of samples per ray is an important factor in memory usage. As we use an occupancy grid for efficient ray sampling, memory usage is also affected by the sparsity of the optimized scene. This leads to lower memory requirements for the binary grid in several scenes, despite the additional computation of binary feature encoding.
**Experiment 2) Evaluation of the memory requirement of a real-valued feature grid and a binary feature grid.**
||Ours-F1|Ours-F2|Ours-F4|Ours-F8|
|:---|:---:|:---:|:---:|:---:|
|***Synthetic-NeRF***|||||
|Real-valued|4.39 GB|5.46 GB|6.70 GB|10.04 GB|
|Binary|4.40 GB|5.45 GB|6.58 GB|10.06 GB|
|***Synthetic-NSVF***|||||
|Real-valued|4.36 GB|5.47 GB|6.02 GB|10.04 GB|
|Binary|4.37 GB|5.47 GB|6.30 GB|10.02 GB|
|***Tanks and Temples***|||||
|Real-valued|5.42 GB|6.52 GB|7.09 GB|11.10 GB|
|Binary|5.41 GB|6.51 GB|7.92 GB|11.10 GB|
---
**Q3. Can you provide ablations on the binary feature encoding? [Fig.4 in the attached PDF]**
We have additionally performed ablations on binary feature encoding. Specifically, we have compared the rendering quality of a real-valued feature grid (denoted "real-valued") and a binary feature grid (denoted "binary"). As shown in Fig. 4 in the attached PDF, there is a drop in rendering quality when we use the binary grid rather than the real-valued grid at the same resolution setting. Nonetheless, the binary grid achieves a highly compact model size with an impressive compression rate. Also, please note that our multi-bit binarized grid is comparable to a real-valued grid of similar or smaller storage size (e.g., F1 of real-valued vs. F8 of binary).
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: The authors have answered all of my questions. I decided to mantain my rating for this paper. | Summary: The paper proposes a novel approach called binary radiance fields (BiRF) for efficient storage and representation of radiance fields. BiRF utilizes binary feature encoding, where local features are encoded using binary parameters of +1 or -1. This compact encoding significantly reduces storage size and computational costs. The authors introduce a binarization-aware training scheme and extend the multi-resolution hash encoding architecture to a 3D voxel grid and orthogonal 2D plane grids.
The contributions of the paper are 1. The introduction of binary radiance fields (BiRF) as a storage-efficient representation that encodes features using binary parameters. 2. A binarization-aware training scheme that effectively captures feature information and updates binary parameters during optimization. 3. The demonstration of superior reconstruction performance with minimal storage space usage, achieving impressive results in static scene reconstruction.
Strengths: 1. The paper introduces a novel approach called binary radiance fields (BiRF) for representing radiance fields using binary feature encoding. This idea of employing binary parameters to represent local features in radiance fields is innovative and distinguishes it from traditional methods. The application of binarization-aware training and the extension of multi-resolution hash encoding to a hybrid structure further contribute to the originality of the approach.
2. The paper demonstrates high-quality research through rigorous experimentation and evaluation. The proposed BiRF representation outperforms state-of-the-art methods in terms of reconstruction performance while utilizing significantly less storage space. The experiments conducted on various scene datasets provide compelling evidence of the effectiveness and efficiency of the proposed approach.
3. The paper is well-written and effectively communicates the concepts and methodologies to the readers. The authors provide clear explanations of the key ideas, including the binary feature encoding, binarization-aware training scheme, and the hybrid structure of the feature grid. The organization of the paper enables easy comprehension of the research objectives, methodology, and experimental results.
4. The paper's contributions have significant implications for the field of radiance fields and 3D scene modeling. By introducing the BiRF representation, the authors address the critical challenge of storage efficiency in radiance field models, which can greatly impact practical applications. The superior reconstruction performance achieved by BiRF, coupled with its minimal storage requirements, opens up new possibilities for real-world implementation and broader accessibility of radiance fields.
Weaknesses: While the paper demonstrates several strengths, there are also a few areas where it could be improved:
Experimental Evaluation: While the paper presents compelling results by reporting model size and PSNR, it lacks a quantitative evaluation and comparison of inference speed with other relevant methods such as TensoRF and Instant-NGP. To what extent does the hybrid 3D and 2D feature grid architecture impact the inference and training speed of the proposed model? As the backward gradient to the grid is estimated and approximated, does the binary design affect the convergence speed?
What is more, the reported training speed of TensoRF seems slower than in the original paper.
Results: In Figure 2, I find the results of K-Planes on the Synthetic-NSVF dataset are significantly worse than other methods; an analysis of these inferior results should be provided. It is also crucial to provide information about training time and inference speed, as these factors play a significant role in assessing the effectiveness and practicality of the proposed NeRF model.
Hash Collision: In the original Instant-NGP, hash collisions are explicitly handled in that the largest gradients (those most relevant to the loss function) dominate the optimization, and the multi-scale design also alleviates them, since a collision is statistically unlikely to occur simultaneously at every level for a given pair of points. For the proposed BiRF, how does the hash-collision situation compare with Instant-NGP? With binary codes, it seems the multi-scale design will be less effective at preventing hash collisions.
Binarization of learnable parameters: Binarization with the straight-through estimator is a special case of vector quantization or discrete representation, which has been explored in some recent NeRF research [1, 2]. The paper could benefit from a more thorough discussion of these related works. Besides, there appears to be some duplication of content between lines 165-168 and lines 148-151. Streamlining these sections would improve the clarity of the paper.
Reference:
[1] Variable bitrate neural fields
[2] General neural gauge fields
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I would suggest the author to provide an evaluation of the inference speed with other relevant methods such as TensoRF and Instant-NGP, and analyze the effect of 3D&2D feature grid and estimated gradient.
2. It would be helpful to discuss the potential reasons for this performance gap of K-plans and provide an analysis of the inferior results.
3. How does the proposed BiRF model handle hash collision compared to Instant-NGP?
4. It would be beneficial to provide a more thorough discussion of these related works and explain the specific connections and differences between the proposed BiRF approach and the existing literature.
5. Streamlining the mentioned sections to enhance the clarity of the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is discussed in this work.
There is no concern of potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. Following your comments, we have performed more experiments on inference speed (Q1 in the global response, Q1) and convergence speed (Q1, Q2). Also, we have clarified several questionable parts (Q3, Q4) and the strategy for hash collisions (Q5). Furthermore, we will cite the additional related works (Q2 in the global response). The detailed responses to your comments are as follows.
---
**Q1. Impact of 2D-3D hybrid feature grid architecture on the inference and training speed.**
We have further evaluated the inference speed (FPS) and training time (min) of different grid designs: (a) tri-plane representation, (b) voxel representation, and (c) our 2D-3D hybrid representation. Note that the number of parameters of the three representations is similar.
As shown in the table below, the inference and training speed of the 2D-3D hybrid representation is faster than the tri-plane (2D) but slower than the voxel grid (3D). This is because the tri-plane representation needs more computations to obtain local feature values. Specifically, local features in the voxel grid are computed by trilinear interpolation of the 8 nearest points, whereas the tri-plane performs bilinear interpolation of the 4 nearest points on each of three 2D planes, a total of 12 points. The higher computational cost of the tri-plane thus leads to slower speed. In conclusion, the training and rendering speed of our 2D-3D hybrid representation lies between the tri-plane and voxel representations, as ours combines them.
**Experiment 1) Comparison of inference speed (top) and training time (bottom) according to grid design.**
||Synthetic-NeRF|Synthetic-NSVF|Tanks and Temples|
|:---|:---:|:---:|:---:|
|***Inference speed$\uparrow$ (fps)***|
|Tri-plane (only 2D)|3.56|3.98|0.71|
|Voxel (only 3D) |4.46|4.79|0.82|
|Ours (2D-3D hybrid)|3.83|4.64|0.91|
|***Train time$\downarrow$ (min)***|
|Tri-plane (only 2D)|8.69|8.34|8.30|
|Voxel (only 3D) |5.37|5.51|5.28|
|Ours (2D-3D hybrid)|6.10|6.22|6.00|
---
**Q2. Impact of binary feature encoding on the convergence speed. [Fig. 3 in the attached PDF]**
Following your suggestion, we have explored the impact of binary feature encoding on convergence speed by comparing a real-valued feature grid (w/o binary feature encoding) and a binary feature grid (w/ binary feature encoding). As shown in Fig. 3 in the attached PDF, we evaluate the rendering quality of these two feature grids over training time, validating five test views per scene to keep evaluation time manageable. In the early stages, we observe that the binary grid converges faster than the real-valued grid. The real-valued grid overtakes it within 1 minute, however, and both grids reach fully converged performance at a similar time. Therefore, binarization causes no degradation in convergence speed, despite the difference in final rendering quality.
---
**Q3. Slow training speed of TensoRF reported in the manuscript compared to the original paper.**
We have followed the official code of TensoRF [1] with the default configuration. Different environmental settings, such as hardware, might cause the inconsistency: we use a single NVIDIA RTX A6000, while the authors of TensoRF use a single NVIDIA Tesla V100. As we have performed all experiments, including the baselines, in the same environment, the fairness of our comparison is ensured.
**Experiment 3) Comparison of training time for TensoRF models.**
||TensoRF-CP-384|TensoRF-VM-192|
|:---|:---:|:---:|
|Original paper|25.2 min|17.4 min|
|Ours|24.7 min|21.5 min|
---
**Q4. Performance gap of K-Planes on the Synthetic NSVF dataset in Fig. 2.**
Thank you for your careful reading of the results. As shown in Table 7 in the appendix, the score of K-Planes on the *Lifestyle* scene is noticeably lower than those of other models. In the *Lifestyle* scene, we found a severe artifact that causes the low performance and does not disappear when changing the random seed. Since this result might cause confusion, as you pointed out, we will add a comment on this failure case. If the reviewers consider the K-Planes results on the Synthetic-NSVF dataset unreasonable, we are willing to exclude them.
---
**Q5. How about the situation of hash collision compared with instant-ngp? [Fig. 5 in the attached PDF]**
The multi-scale design also helps to prevent hash collisions. Still, the key component of our model for mitigating hash collisions is the 2D-3D hybrid grid representation, which alleviates them explicitly.
In Instant-NGP [2], the main idea of the hash grid is to represent a large number of grid points with only a small hash table. Thus, hash collisions, where two or more grid points are mapped to the same index in the hash table, cannot be avoided. To reduce the frequency of hash collisions, we need to decrease the number of grid points being represented. Therefore, we consider a 2D hash grid with $O(N^2)$ grid points, fewer than the $O(N^3)$ of a 3D hash grid, to alleviate hash collisions, where $N$ is the resolution of the grid.
As shown in Fig. 5 in the attached PDF, we have quantified the average frequency of hash collisions for the two grid designs, 2D and 3D. The 3D grid suffers severe hash collisions at higher resolutions, while the 2D grid yields few collisions even at higher resolutions. Therefore, we combine 2D hash grids, which suffer fewer hash collisions, with a 3D hash grid to recover the performance otherwise restricted by severe collisions. As a result, this 2D-3D feature grid achieves higher rendering quality, as shown in Table 1 of the manuscript.
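The $O(N^2)$-versus-$O(N^3)$ effect can be estimated directly by counting hash-table collisions. The sketch below assumes the XOR spatial hash of Instant-NGP (with its published primes) and a fixed table size; the exact counting protocol behind Fig. 5 is not given in the rebuttal, so treat this only as an illustration, not a reproduction of that figure.

```python
from itertools import product

# Primes from the Instant-NGP spatial hash (Mueller et al., 2022).
PRIMES = (1, 2654435761, 805459861)

def hash_index(coords, table_size):
    h = 0
    for c, p in zip(coords, PRIMES):
        h ^= c * p
    return h % table_size

def collision_rate(resolution, dim, table_size):
    # Fraction of grid points that share a hash-table entry with another point.
    counts = {}
    for coords in product(range(resolution), repeat=dim):
        idx = hash_index(coords, table_size)
        counts[idx] = counts.get(idx, 0) + 1
    collided = sum(c for c in counts.values() if c > 1)
    return collided / resolution ** dim

T = 2 ** 14
print(collision_rate(64, 2, T))  # 64^2 = 4,096 points: few collisions
print(collision_rate(64, 3, T))  # 64^3 = 262,144 points >> T: nearly all collide
```

Since the 3D grid has far more points than table entries, almost every point collides by the pigeonhole principle, while the 2D grid at the same resolution fits comfortably.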
---
**Reference**
[1] Chen et al., Tensorf: Tensorial radiance fields, ECCV 2022
[2] Müller et al., Instant neural graphics primitives with a multiresolution hash encoding, SIGGRAPH 2022
---
Rebuttal Comment 1.1:
Comment: The response clearly solves my concerns. Thus, I improve my final rating from 5 to 6. | Summary: This paper proposes a novel binary radiance fields (BiRF) which binarized the feature encoding to save memory usage of NeRF. In the experiments, the binary radiance field representation demonstrates superior reconstruction performance compared to state-of-the-art efficient radiance field models, all while requiring lower storage allocation. Notably, the proposed model achieves remarkable results in reconstructing static scenes, achieving PSNR values of 31.53 dB for Synthetic-NeRF scenes, 34.26 dB for Synthetic-NSVF scenes, and 28.02 dB for Tanks and Temples scenes. These impressive outcomes are attained using minimal storage space, with only 0.7 MB, 0.8 MB, and 0.8 MB utilized, respectively. The intention behind introducing the binary radiance field representation is to eliminate storage bottlenecks and make radiance fields more accessible for various applications.
Strengths: 1. This paper is well-organized
2. Experiments are convincing and extensive.
Weaknesses: 1. Only binarizing the feature encoding to save memory usage is limited in scope, since the acceleration rate is also important in network quantization and in NeRF inference.
2. The proposed binarization of learnable parameters is of limited novelty. An analysis or discussion of the bottleneck of NeRF quantization or binarization is lacking, which is crucial.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments. We have found that your concerns stem mainly from two sources: the limited contribution of binary feature encoding (Q1) and the missing analysis of the bottleneck of NeRF quantization (Q2). We therefore focus on addressing these concerns in this rebuttal. Our detailed responses to your comments are as follows.
---
**Q1. Only binarizing the feature encoding to save memory usage is limited, since the acceleration rate is also important in network quantization and NeRF inference.**
We agree that acceleration is one of the critical issues for making NeRF more practical. However, we aim to address the issue of large storage size, which restricts the accessibility of recent radiance fields. Since this storage problem is a bottleneck in NeRF advancement, several works have also recognized its seriousness and addressed it with post-quantization [1, 2, 3] and post-optimization [3, 4]. Nonetheless, these approaches have limitations, as described in Sec. 2 of the manuscript, so we propose BiRF to solve the large-storage problem by adopting binary feature encoding.
Although acceleration is not our main target, we have considered it. First, we employ the hash grid of Instant-NGP [5], one of the SOTA methods in both convergence and inference speed, as the base grid implementation of the proposed BiRF. Moreover, we regularize the sparsity of the scene to achieve a further improvement in rendering speed, as described in Sec. 4.3 & Sec. 5.3 of the manuscript. There is also room for further improvement of inference speed, since our binary feature encoding can easily be applied to other feature encoding models, as described in Sec. 5.4 of the manuscript.
---
**Q2. The proposed binarization of learnable parameters is of limited novelty; an analysis or discussion of the bottleneck of NeRF quantization or binarization is lacking, which is crucial. [Fig. 6 in the attached PDF]**
Although we introduced several works (PlenOctrees [1], PeRFception [2], Re:NeRF [3]) that employ quantization in L107-108, it might be insufficient to fully explain the previous NeRF quantization approaches.
Previous methods [1, 2, 3] apply 8-bit quantization to their learned feature values after optimization to reduce the final storage size of the radiance field model. However, there is severe degradation of rendering quality when the feature encoding parameters are quantized to lower bit widths, as shown in the table below and Fig. 6 in the attached PDF. This is because post-quantization incurs information loss, which is the bottleneck of NeRF quantization.
To quantify the loss from post-quantization, we have evaluated the rendering quality of models quantized to 1, 2, 3, 4, and 8 bits using the post-quantization procedure of previous methods [1, 2, 3]. To be specific, we first optimize models with a binary feature grid (w/ binary feature encoding), denoted "ours (1-bit)", and with a real-valued feature grid (w/o binary feature encoding), denoted "base (16-bit)". Then, we quantize the optimized real-valued feature grid from 16-bit to n-bit, denoted "post-quant (n-bit)". As shown in Fig. 6 in the attached PDF and the table below, there is a significant drop in rendering quality as the parameters are quantized to lower bit widths. Binarization (1-bit quantization) in particular leads to such severe information loss that the texture of the target scene can no longer be observed. In contrast, our model retains high rendering quality despite using 1-bit data, since we update the binary parameters during optimization. Therefore, we expect our binarization strategy to solve the existing bottleneck of NeRF quantization, and we consider it an important contribution of our work.
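The information-loss mechanism can be illustrated with a toy uniform post-quantizer. This is our own simplified stand-in, not the quantization pipeline used in [1, 2, 3], and the random features here merely stand in for a trained grid; it shows how the reconstruction error grows as the bit width shrinks after training.

```python
import numpy as np

def post_quantize(x, n_bits):
    # Uniformly quantize trained real values to 2^n_bits levels over their range.
    levels = 2 ** n_bits
    lo, hi = float(x.min()), float(x.max())
    step = (hi - lo) / (levels - 1)
    return np.round((x - lo) / step) * step + lo

rng = np.random.default_rng(0)
features = rng.normal(size=100_000).astype(np.float32)  # stand-in for a trained grid
for bits in (1, 2, 4, 8):
    err = float(np.abs(post_quantize(features, bits) - features).mean())
    print(f"{bits}-bit post-quantization, mean abs error: {err:.4f}")
```

At 1 bit the quantizer keeps only the two extreme levels, so most values land far from their original positions, whereas quantization-aware training (as in the binary feature grid) optimizes the parameters under the 1-bit constraint from the start.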
**Experiment 1) Comparison of the reconstruction performance according to the post-quantization. "Post-quant. (n-bit)" denotes post-quantization to n-bit data.**
||PSNR|SSIM|LPIPS|
|:---|:---:|:---:|:---:|
|Post-quant. (1-bit)|16.85|0.797|0.219|
|Post-quant. (2-bit)|17.85|0.933|0.078|
|Post-quant. (3-bit)|25.33|0.958|0.048|
|Post-quant. (4-bit)|31.61|0.962|0.041|
|Post-quant. (8-bit)|33.68|0.963|0.039|
|Ours (1-bit)|32.64|0.959|0.049|
|Base (16-bit)|33.69|0.963|0.039|
---
**Reference**
[1] Yu et al., Plenoctrees for real-time rendering of neural radiance fields, ICCV 2021
[2] Jeong et al., PeRFception: Perception using radiance fields, NeurIPS 2022
[3] Deng et al., Compressing explicit voxel grid representations: fast nerfs become also small, WACV 2023
[4] Li et al., Compressing volumetric radiance fields to 1 mb, CVPR 2023
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Since the discussion with authors is closing soon, could you please go over the reviews and rebuttals, and respond to the content of the authors response with a message to the authors (you can post with one message summarizing all such reviews). It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers.
-AC | Rebuttal 1:
Rebuttal: **Global response**
Dear reviewers,
We thank all reviewers for their insightful feedback. As highlighted by the reviewers, our paper proposes concise (EsYg, fKiM, VKML) and innovative (ptCH, VKML) ideas and is well-written (Bcw2, ptCH, fKiM). Also, we are delighted that the reviewers thoroughly agree on the importance of storage efficiency for NeRF models (EsYg, ptCH, fKiM). As additional verification and explanation were requested by the reviewers, we have tried to respond to all of these valuable concerns within the limited rebuttal period. Typos and additional results will be updated in our revision. Please refer to the attached PDF, which contains figures for the global response and the responses to each reviewer.
---
**Shared response for inference speed**
**Q1. Evaluation on the inference speed (EsYg, ptCH). [Fig.1 in the attached PDF]**
We have evaluated the inference speed of our models and the baselines, as shown in Fig. 1 in the attached PDF and the table below. Despite its highly compact storage size, our model demonstrates comparable inference speed; in particular, Ours-F2 reaches 3.83 fps, 4.64 fps, and 0.82 fps on the three datasets, respectively. However, several models (DVGO [1], Plenoxels [2], Instant-NGP [3]) still render faster, so we plan to adopt acceleration methods, such as efficient ray sampling or binary operations, in future work. There are no significant obstacles to doing so, since our binarization strategy can be applied to any feature encoding method as a plug-in. Additionally, the slow speed of Ours-F1 stems from the inefficiency of atomic half-precision accumulation implemented in tiny-cuda-nn [4], the library underlying our hash grid implementation.
**Experiment 1) Comparison of inference speed (FPS).**
We reported the average FPS of each dataset.
||Synthetic-NeRF|Synthetic-NSVF|Tanks and Temples|
|:---|:---:|:---:|:---:|
|DVGO|5.90|6.69|1.29|
|Plenoxels|10.35|7.52|2.31|
|TensoRF-CP|0.71|0.77|0.16|
|TensoRF-VM|1.20|1.25|0.28|
|CCNeRF-CP|1.16|1.22|0.24|
|CCNeRF-HY|1.01|1.07|0.22|
|Instant-NGP|3.90|4.23|1.36|
|K-Planes-explicit|0.88|0.91|0.24|
|K-Planes-hybrid|0.75|0.91|0.29|
|Ours-F1|3.70|4.45|0.80|
|Ours-F2|3.83|4.64|0.82|
|Ours-F4|3.41|4.13|0.64|
|Ours-F8|2.72|3.45|0.45|
---
**Shared response for related work**
**Q2. Discussion on additional related work (ptCH, VKML).**
Thank you for pointing out these relevant works. We will cite the mentioned papers [5, 6] in the related work. In particular, VQAD [5] successfully compresses the feature grid parameters into a small codebook with learned indices rather than using a hash function. Nonetheless, it still needs costly per-grid-point data to represent the learned codebook indices, while ours needs only binary information. Moreover, as commented by reviewer VKML, it does not directly fit the representative NeRF benchmarks we use. Specifically, it may require depth information for pre-processing, but the datasets we used do not contain depth maps. Please understand our excluding VQAD from the comparison, despite its strengths.
---
**Shared response for the manuscript**
**Q3. Modification of the manuscript on typos, duplicated contents, and missing explanations for variables (EsYg, ptCH, fKiM).**
Thank you for pointing out the parts of the manuscript that need modification. We will correct the minor typos in L104 & L135 and add explanations for the variables in L191 & L197. Moreover, we have confirmed that L148-151 and L165-168 contain similar content, as reviewer ptCH mentioned. We will revise L165-168 to only briefly introduce the use of STE, because it is already described in L148-151. We will incorporate all of these changes in the revised manuscript.
---
**Reference**
[1] Sun et al., Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction, CVPR 2022
[2] Fridovich et al., Plenoxels: Radiance fields without neural networks, CVPR 2022
[3] Müller et al., Instant neural graphics primitives with a multiresolution hash encoding, SIGGRAPH 2022
[4] Müller, Tiny cuda neural networks, https://github.com/NVlabs/tiny-cuda-nn
[5] Takikawa et al., Variable bitrate neural fields, SIGGRAPH 2022
[6] Zhan et al., General neural gauge fields, ICLR 2023
Pdf: /pdf/7d701bb334fc868b4284c57bc8740054aadf89aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors propose BiRF (Binary Radiance Fields), a storage-efficient representation for neural radiance fields. The technique relies on a hybrid representation that leverages explicit feature grids (both one 3D and three 2D, each at multiple resolutions) combined with density and color MLPs. To achieve high storage efficiency, the feature grids are binarized following the technique from Binarized Neural Networks. The reconstruction loss includes a sparsity-inducing loss (similarly to SNeRG and Plenoxels).
The main technical contributions of the paper are this specific storage-efficient representation of neural radiance fields with a matching training scheme and an array of comparisons against other methods.
The authors demonstrate results through a number of quantitative evaluations on the Synthetic-NeRF, Synthetic-NSVF and Tanks&Temples datasets and against multiple baselines (fast ones: DVGO, Plenoxels, TensoRF, CCNeRF, Instant-NGP and K-Planes and compact ones: Re:NeRF, VQRF):
- the proposed representation is indeed very storage efficient, with < 1 MB at acceptable quality,
- it generally delivers quality reconstructions with lower storage requirements compared to either efficient or compressed alternatives,
- training time remains reasonable (only behind Instant-NGP, DVGO and Plenoxels depending on the operating point).
Lastly, the storage efficiency of the technique is showcased to be relevant for an application on dynamic scenes.
Strengths: Storage efficiency is an important aspect of making neural radiance fields more practical for concrete applications. While there are several neural radiance fields approaches focusing on trading rendering (and training) speed at the expense of storage efficiency and other approaches tackling storage efficiency separately by compressing a trained representation, the proposed approach addresses storage efficiency directly without necessitating a post-processing step after training and also without sacrificing quality.
The approach offers competitive operating points in terms of quality v.s. storage, while offering reasonable training time.
The method is conceptually simple and the paper does a solid job of presenting it, demonstrating its value through both quantitative and qualitative experiments, including a large number of baselines to compare against. The ablations are also thorough.
Weaknesses: Novelty is limited as this is essentially an application of Binarized Neural Networks to neural radiance fields approaches.
As quality is one of the claims of the approach, a comparison against a few non-compressed and non-optimized baselines (e.g. original NeRF, mip-NeRF) would have made sense.
The approach explores, with good results, another trade-off compared to the efficiency-oriented techniques that sacrifice storage. However, the only mentions of rendering speed (at test time) are in the ablation on the sparsity loss and in the limitations section. The apparently unoptimized implementation is another weak point of the submission, and a comparison of rendering speed, especially against the considered baselines, would also have made sense.
Minor corrections:
- l.104 Pleoxels -> Plenoxels
- l.135 consider the -> consider an
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Following the above, would the authors be able to include a thorough comparison of rendering speed against the considered baselines?
Binarized Neural Networks, which this paper follows, used stochastic binarization in some of their experiments (as a form of regularization); is this something that has been considered?
Table 3 in the supplementary material suggests that increasing the hash table size helps; have the authors tried going beyond {2^19, 2^21}? Are the results plateauing?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. We have additionally performed more experiments following your comments: inference speed (Q1 in global response), more baselines (Q1), and extended ablation study (Q3). Also, we positively consider adopting stochastic binarization for further improvement of binary feature encoding (Q2). The detailed responses to your comments are as below.
---
**Q1. Evaluation on original NeRF and mip-NeRF.**
We provide additional results for the original NeRF [1] and mip-NeRF [2] in our comparison. We have followed the default architectural settings of the original papers using the code implemented in NeRF-Factory [3]. We optimize both NeRF and mip-NeRF for 300K iterations with a batch size of 4,096 on a single NVIDIA RTX A6000; it takes about 24 hours to converge for a single scene.
As shown in the table below, both NeRF and mip-NeRF show lower reconstruction quality at larger storage sizes compared to Ours-F2. Despite the lightweight network architectures of implicit NeRFs, our model (Ours-F2) demonstrates higher performance at a more compact storage size than NeRF and mip-NeRF.
**Experiment 1) Quantitative evaluation of original NeRF and mip-NeRF.**
(We reported the averaged score of each dataset.)
||Size (MB)|PSNR|SSIM|LPIPS|
|:---|:---:|:---:|:---:|:---:|
|***Synthetic-NeRF***|
|NeRF [1] |4.6|31.69|0.951|0.065|
|Mip-NeRF [2]|2.3|32.20|0.955|0.062|
|**Ours-F2**|**1.4**|**32.64**|**0.959**|**0.049**|
|
|***Synthetic-NSVF***|
|NeRF [1] |4.6|34.46|0.967|0.044|
|Mip-NeRF [2]|6.1|35.33|0.971|0.039|
|**Ours-F2**|**1.5**|**35.40**|**0.976**|**0.024**|
|
|***Tanks and Temples***|
|NeRF [1] |4.6|27.58|0.902|0.171|
|Mip-NeRF [2]|6.1|27.77|0.901|0.171|
|**Ours-F2**|**1.5**|**28.44**|**0.916**|**0.122**|
---
**Q2. Consideration of stochastic binarization.**
Thank you for suggesting stochastic binarization [4]. We agree that stochastic binarization might yield an interesting regularization effect. Despite its strengths, our current approach does not adopt stochastic binarization, because we want to optimize a 3D scene as quickly as possible: simple and effective deterministic binarization avoids the random-number generation that stochastic binarization requires.
We have compared the performance of stochastic binarization and deterministic binarization (ours), as shown in the table below. Applying the stochastic strategy directly tends to be unstable during optimization; compared to deterministic binarization, it takes longer to train and shows lower rendering quality. However, properly combining the stochastic strategy with deterministic binarization may lead to better performance, so we will consider regularization via stochastic binarization for further improvement in future work.
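For reference, the two binarization rules under comparison can be written down compactly. This follows the deterministic sign rule and the hard-sigmoid sampling rule from Binarized Neural Networks (Hubara et al. [4]); it is only a sketch of the per-parameter operation, not the training loop used in the paper.

```python
import numpy as np

def binarize_deterministic(w):
    # sign(w), with sign(0) := +1, as in Binarized Neural Networks [4].
    return np.where(w >= 0, 1.0, -1.0)

def binarize_stochastic(w, rng):
    # Sample +1 with probability hard_sigmoid(w) = clip((w + 1) / 2, 0, 1),
    # so that E[b] = w for w in [-1, 1].
    p = np.clip((w + 1.0) / 2.0, 0.0, 1.0)
    return np.where(rng.random(w.shape) < p, 1.0, -1.0)

rng = np.random.default_rng(0)
w = np.array([0.3, -0.8, 0.0, 0.9])
print(binarize_deterministic(w))  # [ 1. -1.  1.  1.]
estimate = np.mean([binarize_stochastic(w, rng) for _ in range(20_000)], axis=0)
print(estimate)  # approaches w elementwise
```

The stochastic rule is unbiased in expectation but adds sampling noise (and random-number generation) to every forward pass, which is consistent with the slower, less stable optimization reported in the table below.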
**Experiment 2) Comparison of binarization using Ours-F2.**
(We reported the averaged score of each dataset.)
||Train time (min)|PSNR|SSIM|LPIPS|
|:---|:---:|:---:|:---:|:---:|
|***Synthetic-NeRF***|
|Stochastic |14.16|29.81|0.924|0.093|
|Deterministic| 6.10|32.64|0.959|0.049|
|***Synthetic-NSVF***|
|Stochastic |11.53|27.64|0.913|0.108|
|Deterministic| 6.22|35.40|0.976|0.024|
|***Tanks and Temples***|
|Stochastic |9.13|27.77|0.888|0.174|
|Deterministic|6.00|28.44|0.916|0.122|
---
**Q3. Further ablation on the hash table size described in Table 3 in the supplementary material. [Fig. 2 in the attached PDF]**
Thank you for suggesting an extended ablation on the hash table size {$\log{T_{2D}}$, $\log{T_{3D}}$}, where $T_{2D}$ and $T_{3D}$ denote the hash table sizes of the 2D and 3D grids, respectively. We have performed additional ablations with smaller and larger hash tables, {13, 15} and {21, 23}. As shown in Fig. 2 in the attached PDF and the table below, it is no longer beneficial to use the larger hash table {21, 23}: there is only a slight improvement in reconstruction quality while the storage size increases significantly. In contrast, a significant drop in rendering quality is observed with the smaller hash table {13, 15}, due to severe hash collisions. The results show that hash collisions no longer considerably affect performance from size {17, 19} onward, so we choose {17, 19} as the default setting of our model.
**Experiment 3) Additional ablation on the hash table size {$\log{T_{2D}}$, $\log{T_{3D}}$}. Results are averaged over all scenes of the Synthetic-NeRF dataset.**
| |Ours-F1 Size (MB)|Ours-F1 PSNR|Ours-F2 Size (MB)|Ours-F2 PSNR|Ours-F4 Size (MB)|Ours-F4 PSNR|Ours-F8 Size (MB)|Ours-F8 PSNR|
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|{13, 15}|0.12|29.20|0.18|30.66|0.31|32.00|0.58|32.90|
|{15, 17}|0.27|30.66|0.46|32.03|0.87|32.99|1.73|33.46|
|{17, 19}|0.72|31.53|1.14|32.64|2.83|33.26|5.76|33.59|
|{19, 21}|1.94|31.61|3.99|32.71|7.84|33.20|16.61|33.51|
|{21, 23}|7.05|31.93|14.70|32.89|29.65|33.31|61.35|33.55|
---
**Reference**
[1] Mildenhall et al., Nerf: Representing scenes as neural radiance fields for view synthesis, ECCV 2020
[2] Barron et al., Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields, ICCV 2021
[3] Kakao Brain, Nerf-factory: an awesome pytorch nerf collection, https://github.com/kakaobrain/nerf-factory
[4] Hubara et al., Binarized neural networks, NeurIPS 2016
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Since the discussion with authors is closing soon, could you please go over the reviews and rebuttals, and respond to the content of the authors response with a message to the authors (you can post with one message summarizing all such reviews). It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers.
-AC
---
Rebuttal Comment 1.2:
Comment: I have read the rebuttal and have been following the discussions. I want to thank the authors for the overall rebuttal and I do appreciate them taking the time to answer all questions from the reviewers including mine.
All the remarks I had have been addressed. The comparison against unoptimized baselines is convincing and can help support the quality claim. At this point, the remaining weaknesses are the novelty and the implementation whose efficiency could have been pushed further. I am thus upgrading my overall rating from borderline accept to weak accept. | null | null | null | null | null | null |
Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification | Accept (poster) | Summary: The paper presents a novel language-driven ordering alignment method called L2RCLIP for ordinal classification. The authors leverage pre-trained vision-language models to incorporate rich ordinal priors from human language. They propose RankFormer, a prompt tuning technique that enhances the ordering relation of rank prompts using token-level attention and residual-style prompt blending. Additionally, they introduce a cross-modal ordinal pairwise loss to refine the CLIP feature space, ensuring semantic and ordering alignment between texts and images. The proposed method is evaluated on facial age estimation, historical color image classification, and aesthetic assessment tasks, showing promising performance.
Strengths: 1. The paper introduces a novel method that leverages language priors to address the overfitting issue in ordinal classification.
2. The experimental results indicate that the proposed method achieves promising performance on the evaluated tasks, suggesting its effectiveness in addressing the overfitting problem.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to the Reviewer v94e
Comment:
Thank you for your positive review and constructive feedback. | Summary: The paper presents L2RCLIP, a novel language-driven ordering alignment method for ordinal classification. The authors propose to leverage the rich ordinal priors in human language by converting the original task into a vision-language alignment task. The method introduces a complementary prompt tuning technique called RankFormer, designed to enhance the ordering relation of original rank prompts. In addition, the authors propose a cross-modal ordinal pairwise loss to refine the CLIP feature space, where texts and images maintain both semantic alignment and ordering alignment. The method is evaluated on three ordinal classification tasks, including facial age estimation, historical color image (HCI) classification, and aesthetic assessment, showing promising performance.
Strengths: The proposed loss function is an interesting contribution to the field, as it provides a new perspective on viewing cross-entropy loss within the context of ordinal regression. The authors' approach to incorporating language priors and restructuring the cross-modal embedding space using cross-modal ordinal pairwise loss is innovative and well-presented.
The method demonstrates impressive performance on various ordinal classification tasks, outperforming previous state-of-the-art methods. This indicates the potential of L2RCLIP in addressing real-world problems related to ordinal classification, including facial age estimation, historical image dating, and image aesthetics assessment.
Weaknesses: The comparison with previous methods seems unfair, as the authors use a much stronger image backbone from CLIP, while previous papers use VGG16 as their backbone. This could be a significant factor contributing to the improved performance of L2RCLIP. It would be beneficial for the authors to provide a fair comparison by also evaluating their method using a similar backbone to previous works.
In Figure 3, some unexpected spikes around age 10 and some cube-like patterns are observed, indicating that the ordinality score is not as smooth as expected, despite being higher than previous methods. The authors could consider reducing the window of comparisons and evaluating the ordinality score more comprehensively. This would provide a better understanding of the method's performance in terms of ordinality.
The claim of Rank-specific prompts being effective in enhancing the ordering relation is too strong, and there is no experimental evidence to support it. While token mix may increase computation and information flow, it does not necessarily guarantee the ordinal property of rank features or the final textual features. The authors should provide experimental evidence or a more detailed explanation to support this claim.
The paper could be improved by describing the final loss composition in the main text, which is currently missing. Providing a clear explanation of how the different components of the loss function are combined would help the reader better understand the proposed method.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Can the authors provide a fair comparison by evaluating L2RCLIP using a similar backbone to previous works?
How can the ordinality score be better evaluated to provide a more comprehensive understanding of the method's performance?
Could the authors provide experimental evidence or a more detailed explanation to support the claim of Rank-specific prompts being effective in enhancing the ordering relation?
Can the authors describe the final loss composition in the main text, providing a clear explanation of how the different components of the loss function are combined?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors need to analyze the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 6B9T
We sincerely appreciate your positive review and valuable comments. Please find our responses below.
***
**Q1: Comparison with previous methods using the same architecture**
**[Reply]** Our method does not show promising results with VGG16, which may be reasonable since our two key designs rely on the well-aligned text-image latent space of CLIP. For fairness, we keep the same experimental setting and retrain the interpolation-based method based on OrdinalCLIP. The results and training details are as follows:
**Table 1. Results on Morph, CLAP2015 and Adience datasets**
|Method| Morph (MAE) | CLAP2015 (MAE) | Adience (MAE) | Adience (Acc.)
|---|---|---|---|---|
|L2RCLIP-I|2.19|2.78| 0.42 ± 0.06|62.9 ± 5.5|
|L2RCLIP(ours)|**2.13**|**2.62**| **0.36 ± 0.05** |**66.2±4.4**|
**Table 2. The MAE results under the distribution shift setting on MORPH II**
|re cls - re smp| 10-90| 20-80| 20-90| 30-80| 30-90| 40-80| 40-90|
|---|---|---|---|---|---|---|---|
|L2RCLIP-I| 2.39| 2.45 |2.50 |2.57| 2.70| 2.73| 2.93|
|L2RCLIP(ours)| **2.30**| **2.37**|**2.43**|**2.51** |**2.61**| **2.68**| **2.79**|
**Table 3. The MAE results under the few-shot setting on MORPH II**
|#shots |#1| #2| #4| #8 |#16| #32| #64|
|---|---|---|---|---|---|---|---|
|L2RCLIP-I| **4.31** |4.02| 3.63 |3.48| 3.13| 2.80| 2.62|
|L2RCLIP(ours)| 4.54| **3.92** |**3.40** |**3.28**| **2.81**| **2.55**| **2.38**|
For fairness, we use the official code of OrdinalCLIP for interpolation. We conduct three groups of experiments to verify the effectiveness of our method. **First**, as illustrated in Table 1, our method outperforms interpolation-based methods by a significant margin in experiments involving a large number of rank categories. This outcome is attributable to the challenge posed by direct interpolation methods in modelling complex ordering relationships. **Second**, our method exhibits superior performance in the majority of few-shot learning tasks and distribution shift tasks compared to interpolation-based methods. Collectively, these experiments corroborate the effectiveness of the methods proposed in this study.
**The detail for L2RCLIP-I**: Firstly, we utilize the ViT-B/16 visual backbone of CLIP for image feature extraction, whereas OrdinalCLIP employs a pre-trained VGG-16 network supplemented by a linear projection layer. Secondly, our method relies on a two-stage training strategy, in contrast to the one-stage approach adopted by OrdinalCLIP.
**Q2: The ordinality score with less ranks.**
**[Reply]** Thanks for your suggestions. We reduce the number of ranks for better visualization. **The results are shown in the .pdf file.**
**Q3: The ordinal property learned by RankFormer.**
**[Reply]** To avoid the token mixing effect, we conduct a global context prompt ablation study. The results are as follows.
**Table 4. Ablation study of global context prompts.**
|Method |Morph (MAE)| Morph (OS)| CACD (MAE)| CACD (OS)|
|---|---|---|---|---|
|Vanilla CLIP |6.91 |55.36%| 4.66| 52.51%|
|CoOp(Variant) |2.39 |59.92%| 2.75 |53.33%|
|w/o context prompt| 2.23| 65.46% |2.76 |67.17%|
|L2RCLIP(Ours)| 2.13| 71.87% |2.62 |67.55%|
The above results show that our proposed methods actually help learn the ordinal property. Additionally, we visualize the embedding space with t-SNE. **The results are shown in the .pdf file.**
**Q4: The training detail about final loss used in L2RCLIP**
**[Reply]** Due to page limitations, we have included this section in the supplementary materials. We utilize the cross-modal ordinal pairwise loss $L_{cop}$ and asymmetrical contrastive loss $L_{t2i}$ and $L_{i2t}$ to learn reliable rank prompts. In the second stage, we employ the cross-entropy loss $L_{ce}$ and simplified cross-modal ordinal pairwise loss $L_{scop}$ to fine-tune the image encoder. **Further details can be found in Supp. Line 17-25.**
Every effort has been made to address your comments faithfully in the revised paper. If you have any additional comments, please let us know. Thank you again for your positive and insightful comments. We do appreciate them.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed explanation of my concerns.
I should restate my view that the paper presents a novel approach to ordinal classification and shows promising performance on various tasks. However, the comparison with previous methods seems unfair, and some claims lack supporting evidence. Addressing these concerns and providing a more comprehensive evaluation of the ordinality score would strengthen the paper and justify a higher rating.
Till now, I am satisfied with the authors' reply, apart from:
1) The use of the rank token transformer does not guarantee ordinality. In fact, I personally do not deem the linear comparison to be the true ordinality, since the high-dimensional statistics are highly complex. Therefore, the t-SNE may not be strong enough to support the claim but is suitable for validating the intuition. Anyway, the performance is indeed improved with more computation, so I kindly suggest the authors rephrase their depiction of the rank token interaction improving ordinality; otherwise I would think it is over-claimed.
2) It could be better to compare the ordinality score at a small range of value space, instead of merging the ranks.
Thanks again for your informative reply and I would like to hear more updates from the authors.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 6B9T
Comment: Thanks for your reply. We would like to clarify the issues as follows.
***
**Q1. Ordinality score and the ordinality learned by RankFormer**
**[Reply]** **For the ordinality score**, we acknowledge that this metric is not perfect because it is challenging to maintain the linear assumption in a high-dimensional manifold, as you suggested. To alleviate this strong assumption, we propose that the locally linear manifold can be preserved within a fixed small window size. Therefore, we calculate the local ordinality score using window sizes of 2, 4, 8, 16, and 32. The results of the local ordinality score, shown in Table 1, demonstrate that our L2RCLIP effectively maintains local ordinality within a fixed small window size. **For the ordinality learned by RankFormer**, we believe that transformer-based architectures are more capable of modeling the relationships between input tokens. In ordinal classification tasks, the training model may tend to leverage ordinal information, as it is a straightforward way to minimize the loss. Based on this assumption, we propose a token-wise RankFormer to enhance the ordinality between input rank templates.
We have also compared its performance with an MLP-based architecture to avoid effects driven by extra computation. The results are presented in Table 2. Note that both RankFormer and MLP have similar training parameters. **Finally**, we sincerely appreciate your valuable advice. We will make sure to revise our content regarding the ordinality in the upcoming version to enhance rigor and clarity.
Table 1. The local ordinality score results on the MORPH II dataset.
|#window size| #2| #4| #8 |#16 |#32|
|---|---|---|---|---|---|
|Vanilla CLIP| 100.00% |83.33%| 78.57% |70.83% |60.08%|
|OrdinalCLIP |100.00% |100.00% |100.00% |96.19% |—|
|L2RCLIP(Ours) |100.00% |100.00% |100.00% |100.00% |97.78%|
Table 2. Ablation study on architecture of proposed models.
|Arch. |OS |MAE|
|---|---|---|
|MLP |67.48%| 2.27|
|RankFormer |71.87% |2.13|
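For concreteness, the windowed evaluation described in Q1 could be sketched as below. The exact metric definition follows OrdinalCLIP and the paper's supplement, so this NumPy helper (`local_ordinality_score`) is only an illustrative assumption: it counts, within each window, the fraction of rank triples whose embedding distances preserve the rank order.

```python
import numpy as np

def local_ordinality_score(emb, window):
    # Illustrative definition (the exact metric follows OrdinalCLIP):
    # within every window of `window` consecutive ranks, count the
    # fraction of triples (i, j, k) with |i - j| < |i - k| whose
    # embedding distances preserve that order, i.e. d(i, j) < d(i, k).
    n = len(emb)
    correct = total = 0
    for start in range(n - window + 1):
        idx = range(start, start + window)
        for i in idx:
            for j in idx:
                for k in idx:
                    if abs(i - j) < abs(i - k):
                        total += 1
                        if (np.linalg.norm(emb[i] - emb[j])
                                < np.linalg.norm(emb[i] - emb[k])):
                            correct += 1
    return correct / total if total else 1.0

# Perfectly ordered 1-D rank embeddings score 100% at any window size.
emb = np.arange(10, dtype=float)[:, None]
print(local_ordinality_score(emb, 4))   # 1.0
```

Under this reading, a score of 100% at small windows but not at large ones matches the "locally linear manifold" argument above.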
**Q2. The merging of ranks**
**[Reply]** We would like to clarify a potential misunderstanding. We do not merge any ranks; instead, we only choose 6/12/20 ranks from the left corner of Fig. 3 for better visualization. Our primary focus is to describe relative ordinality rather than absolute ordinality, so we apply max-min normalization within each local window.
Please let us know if it addresses your concern. Thank you again for your insightful comments. We do appreciate them. | Summary: In this paper, a novel ordinal regression framework based on CLIP is proposed. The proposed algorithm, which is called L2RCLIP, exploits language priors together with image features.
It encourages that image features at each class locate around the text feature of that class in the embedding space. To this end, L2RCLIP uses RankFormer to obtain text features and CLIP image encoders to obtain image features.
The network is optimized via cross-modal ordinal pairwise loss. Extensive experiments on various ordinal regression tasks show that the proposed algorithm outperforms the previous SOTA, ordinalCLIP.
Strengths: 1. The paper is easy to follow. It describes the proposed algorithm clearly and seems to be reproducible.
2. The proposed algorithm is simple but technically sound. It also achieves the best scores in most tests.
3. Experiments are diverse and solid enough to evaluate the performance properly. The paper also provides extensive ablation studies for a better understanding of each part of the proposed algorithm.
Weaknesses:
* Major
1. It would be better to discuss the loss functions in Eq(2)~Eq(6) more deeply. The paper lacks an explanation of how each loss function steers network training in the desired direction.
2. Are global context prompts learnable parameters as well?
3. If so, it would be interesting to see the ablation result for global context prompts. It may be used as the auxiliary role for the language priors, but it may reduce the impact of the rank template encoding.
In such a case, without the global context prompts, the results in Table 6 may be changed meaningfully.
4. MORPH II has 4 widely used evaluation settings. In the paper, only the evaluation on the simplest setting is provided. It would be helpful to compare the performances on the other, more challenging settings.
* Minor
1. L135, Fig. 3.2 -> Figure 2
2. In Eq (1): z_j -> z_i?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness section for my questions on the paper. In overall, I'm leaning to accept because the proposed algorithm is clearly described, and technically sound. Also, it achieves the good scores on various benchmark tests. However, I will see the other reviewer's opinion and the author response too.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have addressed the potential negative social impact in the main paper, but I was not able to find a specific discussion of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer XrDm
We sincerely appreciate your positive review and valuable comments. Please find our responses below.
***
**Q1: To prove the effectiveness of proposed losses**
**[Reply]** Thank you for your suggestion. We provide a detailed analysis of Eq(2) to Eq(6) as follows:
- **Eq(2)->Eq(3)** Compared to images, language contains a higher density of information. Intuitively, the information conveyed by **"a 20-year-old person"** and a large number of images containing 20-year-old people is similar. Therefore, we attempt to combine language features with pairwise loss in a mean form.
- **Eq(3)->Eq(4)** Previous work has addressed the diversity term through meanNN-based entropy estimation.
- **Eq(3)->Eq(5)** The tightness term can be straightforwardly transformed into a loss objective function.
- **Eq(4),Eq(5)->Eq(6)** We introduce a simple distance term to further enhance the ordering relation.
**Q2: The role of global context prompts**
**[Reply]** The results of the related ablation study are presented in Table 1:
**Table 1. Ablation study of global context prompts.**
|Method |Morph (MAE) |Morph (OS) |CACD (MAE) |CACD (OS)|
| ---- | ---- | ---- | ---- | ---- |
|Vanilla CLIP |6.91 |55.36% |4.66 |52.51%|
|CoOp(Variant) |2.39 |59.92% |2.75 |53.33%|
|w/o context prompt |2.23 |65.46% |2.76 |67.17%|
|L2RCLIP(Ours) |2.13 |71.87% |2.62 |67.55%|
**Q3: Results on all settings of Morph II**
**[Reply]** We have conducted experiments on the other three settings of Morph II. The results are presented in Table 2.
**Table 2. Additional results on Morph II**
|Method |SettingA |SettingB |SettingC |SettingD|
| ---- | ---- | ---- | ---- | ---- |
|MWR-G(2022, CVPR) |2.24 |2.55 |2.61 |2.16|
|GOL(2022, NeurIPS) |2.17 |2.60 |**2.51** |2.09|
|L2RCLIP(Ours) |**2.13** |**2.53** |2.56 |**1.95**|
**Q4: Some minor typos**
**[Reply]** Thank you for pointing out the typos. We have corrected the mentioned errors in our revised paper.
Every attempt has been made to address your comments faithfully in the revised paper. If you have any additional comments, please let us know. Thank you again for your positive and insightful comments. We do appreciate them. | Summary: The paper proposed to leverage vision-and-language models to improve ordinal classification. This is a follow-up work on the previous OrdinalCLIP paper. The major contribution is RankFormer, which is designed to enhance the ordering of the original rank prompts. Also a cross-modal ordinal pairwise loss is proposed to refine the CLIP feature space. Experimental results are presented on three ordinal classification tasks, including facial age estimation, historical color image classification, and aesthetic assessment.
Strengths: The intuition behind the proposed method makes sense to me. The overall idea is simple and straightforward, and should be easy to reproduce. The experimental results seem to be extensive and can demonstrate the effectiveness of the proposed method.
Weaknesses: The following are more detailed comments and suggestions about the paper.
1, In Line 10-11, if the goal is to incorporate language priors, why use the CLIP model? The text encoder in CLIP is not very strong. Existing LLMs can provide much better language priors.
2, The paper claims that the proposed model is designed to learn both semantic and ordering based on Figure 1. It is better to provide some examples or analysis about how the semantic and ordering are learned simultaneously. The visualization in Figure 3 is not very helpful.
3, The paper may want to provide more details about RankFormer in Line 145-152. Why call it token-wise attention? It seems to be the basic attention mechanism. Maybe illustrate the architecture of RankFormer as well.
4, In Figure 2, why fix the CLIP_{Text} encoder but fine tuning the CLIP_{Image} encoder? The “C” in the left part means “concatenation”?
5, In Line 147-149, “k is the length of rank templates”, so k is smaller than M? How to pick k in this formulation?
6, The writing of the paper can be much better:
In Line 149, is the the length of rank templates…
Where is Fig. 3.2 in Line 135?
“k” is used everywhere in the paper: in Line 182, for the number of global context prompts; in Figure 2 for the index of rank templates; in Line 149, for the length of rank templates.
7, In Line 184, the paper proposed to use asymmetrical contrastive loss to handle many-to-many image-text mapping within the batch. It is unclear to me why this asymmetrical loss can better handle many-to-many mapping? Please elaborate more.
8, As a follow-up paper of OrdinalCLIP, the paper is an incremental improvement over OrdinalCLIP, and the novelty of the paper seems to be limited.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer GKDm
We would like to thank the reviewer for the valuable comments. However, we feel there is some misunderstanding. We clarify the issues and address the questions accordingly as described below.
***
**Q1: Choice of using powerful language model**
**[Reply]** That may be a promising research direction to further improve performance in ordinal classification. Due to limited computational resources, we have chosen the CLIP text encoder to provide language priors. Despite this limitation, we have found that CLIP performs well in three tasks of ordinal classification: age estimation, historical color image classification, and aesthetic assessment. Our proposed method has achieved state-of-the-art (SOTA) performance on several test benchmarks. As you suggested, we believe our method can provide new insights for further LLM-based ordinal classification.
**Q2: Validation of learned ordinal property and semantic alignment**
**[Reply]** **For the ordinal property**, we use three kinds of methods to verify what is learned by our proposed modules. **First**, we follow OrdinalCLIP (Li et al., NeurIPS 2022) and adopt the ordinality score to measure the distance of normalized rank templates quantitatively and qualitatively. We outperform the previous method by over 5.93%. **Second**, we conduct a comprehensive ablation study on two different datasets to verify the effectiveness of our proposed method quantitatively. **Finally**, we visualize the embedding space for the ablated methods in Supp. Fig. 4. We think these qualitative and quantitative analyses support the rank information learned by our proposed method.
**For semantic alignment**, we think semantic alignment can be measured using metrics such as MAE when CLIP is adopted for classification. Our method performs the best in 15 out of 16 benchmark tests, which demonstrates that it achieves better semantic alignment. Moreover, we conduct an additional ablation study on global context prompts to further show that our RankFormer and the proposed cross-modal ordinal pairwise loss achieve both semantic alignment and ordering alignment compared with previous methods. The results are shown in Table 1.
Table 1. Ablation study of global context prompts.
|Method |Morph (MAE)| Morph (OS) |CACD (MAE)| CACD (OS)|
|---|---|---|---|---|
|Vanilla CLIP |6.91 |55.36% |4.66 |52.51%|
|CoOp(Variant) |2.39 |59.92% |2.75| 53.33%|
|OrdinalCLIP(w context prompt) |2.32 |65.94% |— |—|
|w/o context prompt |2.23 |65.46% |2.76 |67.17%|
|L2RCLIP(Ours) |2.13 |71.87% |2.62 |67.55%|
**Q3: Explanation of token-wise attention in RankFormer**
**[Reply]** Given an input tensor $x \in \mathbb{R}^{M\times N\times C}$, normal attention operates on the second dimension, while token-wise attention operates on the first, since we want to enhance the ordinal property in the vanilla rank prompts. In fact, RankFormer handles three different types of tokens. **First**, for special tokens like [EOS], RankFormer keeps them the same during training; these special tokens are not optimized. **Second**, for normal tokens, RankFormer functions similarly to linear layers. **Lastly**, for rank tokens, RankFormer employs a token-level attention mechanism to further enhance the ordinal property. The detailed architecture of RankFormer will be included in our revised version.
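The axis-swap at the heart of this answer can be illustrated with a minimal NumPy sketch. This is not the paper's RankFormer (which has learned projections and special-token handling); `token_wise_attention` is a hypothetical single-head, parameter-free attention showing only which axis is attended over.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_wise_attention(x):
    # x: (M, k, C) rank-prompt embeddings -- M rank templates, k tokens
    # per template, C channels. Ordinary attention would mix along the
    # token axis (k); token-wise attention instead mixes along the rank
    # axis (M), so each token position attends across the M ranks.
    x_t = np.swapaxes(x, 0, 1)                       # (k, M, C)
    scores = x_t @ np.swapaxes(x_t, 1, 2)            # (k, M, M)
    weights = softmax(scores / np.sqrt(x.shape[-1]), axis=-1)
    out = weights @ x_t                              # (k, M, C)
    return np.swapaxes(out, 0, 1)                    # back to (M, k, C)

x = np.random.randn(101, 5, 64)   # e.g. 101 age ranks, 5-token templates
print(token_wise_attention(x).shape)   # (101, 5, 64)
```

In this form, information flows only between corresponding token positions of different rank templates, which is the mechanism the reply credits with enhancing the ordinal property.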
**Q4: Training parameters in CLIP text encoders and image encoders**
**[Reply]** Since we only have coarse rank templates, the performance of text encoders may be significantly degraded after full-parameter fine-tuning. Therefore, we only fine-tune a minimal number of parameters in the text branch. Additionally, "C" represents concatenation, and we will include this correction in our revised framework.
**Q5: Explanation of the meaning of notations**
**[Reply]** Apologies for any confusion caused. In fact, $k$ here is usually smaller than the maximum token length (77 in most cases) in CLIP. $M$ is the number of rank templates, i.e., ordinal categories in ordinal classification; e.g., $M=101$ if the range of age estimation is [0,100]. We choose $k$ based on our pre-defined rank templates and exclude special tokens in practice.
**Q6: Revision of some confusing typos**
**[Reply]** We will ensure to avoid repeated notation in our revised paper. Thanks for your advice.
**Q7: Explanation of the asymmetrical loss**
**[Reply]** In contrast to the normal contrastive learning in CLIP, where each image has only one target label, our case involves images with multiple target labels within a batch. Directly adopting the symmetrical loss used in CLIP would be suboptimal in this scenario, as we need to take all target labels in a batch into account. Therefore, we group the correct matches in the similarity map and compute the loss by taking the mean.
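One reading of "group the correct matches and compute the loss by taking the mean" is sketched below. The function name `asym_contrastive_loss` and the exact averaging are assumptions for illustration; the paper's $L_{t2i}$/$L_{i2t}$ may differ in detail.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def asym_contrastive_loss(sim, labels):
    # sim: (B, B) image-to-text similarities within a batch; labels: (B,)
    # integer ranks. Unlike the one-to-one CLIP loss, every text whose
    # rank matches an image is a positive, and the per-image loss averages
    # the log-probabilities over that whole positive group.
    p = softmax(sim, axis=-1)                   # image -> text probabilities
    pos = labels[:, None] == labels[None, :]    # many-to-many match mask
    per_image = -(np.log(p) * pos).sum(axis=1) / pos.sum(axis=1)
    return per_image.mean()

sim = np.zeros((4, 4))            # uniform similarities for a worked example
labels = np.array([0, 0, 1, 1])
print(round(asym_contrastive_loss(sim, labels), 4))   # 1.3863 (= ln 4)
```

With uniform similarities every probability is 1/4, so each image's averaged negative log-probability is ln 4, matching the printed value.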
**Q8: Limited novelty compared with OrdinalCLIP**
**[Reply]** We do not agree. The analysis is as follows:
Both OrdinalCLIP and our L2RCLIP aim to leverage language priors for the ordinal classification task. The main difference lies in the design of the ranking mechanism. OrdinalCLIP manually designs interpolation rules and applies interpolation to a few learnable rank prompts, achieving good results on certain test benchmarks.
However, we argue that this explicit interpolation may involve a tradeoff between semantic alignment and ordering alignment, as the interpolated result may not guarantee correct semantic alignment. In response, we have designed a new token-wise RankFormer and a novel cross-modal ordinal pairwise loss. This allows us to learn a more complex ranking mechanism while preserving semantic alignment. Our method has achieved state-of-the-art (SOTA) performance, outperforming in 15 out of 16 benchmark tests.
Every attempt has been made to address your comments faithfully in the revised paper. If you have any additional comments, please let us know. Thank you again for your valuable comments. We do appreciate them.
---
Rebuttal 2:
Title: Looking forward to the response from Reviewer yU8S
Comment: Dear Reviewer yU8S,
We have tried our best to address all the concerns and provided as much evidence as possible. May we know if our rebuttals answer all your questions? We truly appreciate it.
Best regards,
Author #3203
---
Rebuttal Comment 2.1:
Comment: Thank the authors for answering my questions during rebuttal. Most of my questions have been addressed during rebuttal. I will increase my rating of the paper to borderline accept.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer yU8S
Comment: Dear Reviewer yU8S,
We greatly appreciate that you will increase the score of our article. This response is just to **remind you that you may have forgotten to change the score in the OpenReview system**. We truly appreciate it.
Best regards,
Author #3203 | Rebuttal 1:
Rebuttal: # Response to All Reviewers
Thank you for your valuable reviews and insightful suggestions. **The .pdf file includes extra figures. Please download it if needed.**
We have made every attempt to address your comments in the revised manuscript and hope that you find this revision satisfactory. If you have additional concerns, please let us know. We do appreciate them.
Pdf: /pdf/82d6056e239736548b50ac8314c6580e87f39531.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes L2RCLIP, which features two modules for ordinal classification with vision-language models, i.e., CLIP. The first is a token-wise attention module called RankFormer to tune the rank prompts. And the second is a pairwise ordinal loss to inject rank information into the supervision. Synergically, the two modules achieves competitive performance across age estimation, aesthetics assessment and historical image dating benchmarks, as well as improvements in few-shot and distribution shift experiments.
Strengths: 1. The proposed two modules are simple and effective in improving ordinal classification ability of CLIP models.
2. The token-wise attention in prompt tuning is interesting.
3. The writing of this paper is of good quality.
Weaknesses: 1. Considering rank information in the loss has been a common practice in ordinal classification methods, as depicted in the related works (line80-line92). It is unclear how language priors are applied in the proposed pairwise loss (Eq.6), and thus what distinguishes this loss from existing methods.
2. Although the token-wise attention is new and intuitively incorporates the information of different rank prompts, the ordinal properties with strictly ordered ranks are not assured in the attention process.
3. Some definitions of variables may cause confusion, e.g., the ‘T’s in Eq.3,4,5 represent the output embeddings of the text encoder, while the ‘T’s in Figure 2(a) seem to represent the prompt embeddings. And it is confusing whether the ‘k’s in line147 and line149 refer to the same thing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors might give qualitative or quantitative analysis of the rank prompts to support that rank information is learned and preserved by RankFormer.
2. The authors might give more elaboration on the synergy of the two proposed modules in Table 5, e.g., explain what necessitates the use of a pairwise loss to unleash the power of RankFormer and how / whether the pairwise loss can be regarded as an indispensable part of the RankFormer?
3. Some details need the authors’ clarification:
1) Why does OrdinalCLIP not preserve semantic alignment, as it has also learned a set of context prompts besides the rank prompts?
2) As the rank templates form an M×k×C tensor in the token-wise attention (line147), does the attention operate on the 1st dimension (M) or on the 2nd dimension (k)?
3) What does “language-related parameters frozen” mean in line 178? Does it mean the prompting part (including context and rank prompts) is frozen or only the text encoder is frozen.
4) As Lscop is a simplified special case of Lcop, how would the two co-exist in Table 5.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer GKDm
We thank the reviewer for the valuable feedback and a positive assessment of our work. We are happy the reviewer finds the paper well-organised and our method interesting, valuable, and innovative with good performance. Below we detail our response to the review concerns.
***
**Q1: The qualitative and quantitative analysis of rank information**
**[Reply]** We use three kinds of methods to verify the rank information learned by our proposed modules. **First**, we follow OrdinalCLIP (Li et al., NeurIPS 2022) and adopt the **ordinality score** to measure the distance of normalized rank templates quantitatively and qualitatively. We outperform the previous method by over 5.93%. **Second**, we conduct a **comprehensive ablation study** on two different datasets to verify the effectiveness of our proposed method. **Finally**, we **visualize the embedding space** for the ablated methods in Supp. Fig. 4. We think these qualitative and quantitative analyses support the rank information learned by our proposed method.
**Q2: Explanation of the two proposed modules**
**[Reply]** We aim to improve the performance of ordinal classification by focusing on two key aspects.
**First**, through our experiments, we have observed that vanilla text prompts already possess a certain degree of ordinal property. Building upon this observation, we have designed a **token-wise RankFormer module** to further enhance the ordering alignment within these prompts. This module specifically focuses on capturing and reinforcing the correct ordering relationships between different tokens. **Second**, taking inspiration from previous work on metric learning, we have **incorporated language knowledge** into a lower bound of cross-entropy loss. Additionally, we have introduced **an additional distance weighting term** to effectively model the embedding space with better ordering alignment. This helps to ensure that the learned representations exhibit the desired ordinal properties.
From the experimental perspective, we have conducted a comprehensive ablation study. This study allows us to individually assess the impact of each module and examine how their effects combine. Furthermore, we have visualized the embedding space of each ablated model, providing additional insight into the behavior and performance of our proposed approach. These visualizations can be found in Supp. Fig. 4.
**Q3.1: The semantic alignment in OrdinalCLIP by context prompts**
**[Reply]** We understand your points regarding the semantic alignment of OrdinalCLIP. We agree that context prompts can enhance semantic alignment, as evidenced by previous works like CoOp and other prompt tuning methods.
However, we would like to emphasize two important aspects. **First**, rule-based interpolation does not guarantee that the interpolated results will always adhere to the correct semantic alignment. This can potentially lead to suboptimal performance on downstream tasks.
**Second**, we consider semantic alignment can be measured using metrics such as MAE when CLIP is adopted for classification. Our methods not only demonstrate better performance on several benchmark tests but also, as shown in Table 1, exhibit promising results in both semantic alignment and ordering alignment, even without the use of global context prompts.
Table 1. Ablation study of global context prompts.
|Method |Morph (MAE) |Morph (OS) |CACD (MAE) |CACD (OS)|
|---|---|---|---|---|
|Vanilla CLIP |6.91 |55.36% |4.66 |52.51%|
|CoOp(Variant) |2.39 |59.92% |2.75 |53.33%|
|OrdinalCLIP(w context prompt) |2.32 |65.94% |— |—|
|w/o context prompt |2.23 |65.46% |2.76 |67.17%|
|L2RCLIP(Ours) |2.13 |71.87% |2.62 |67.55%|
**Q3.2: Token-wise attention in RankFormer**
**[Reply]** The attention mechanism in RankFormer operates on the first dimension. RankFormer handles three different types of tokens.
**First**, for special tokens like [EOS], RankFormer keeps them the same during training. These special tokens are not optimized.
**Second**, for normal tokens, RankFormer functions similarly to linear layers.
**Lastly**, for rank tokens, RankFormer employs a token-level attention mechanism to further enhance the ordinal property.
**Q3.3&Q3.4 : Training detail about language-related parameters and $L_{cop}$/$L_{scop}$**
**[Reply]** We fix the text encoder and image encoder and train only the global context prompts and RankFormer in the first stage, and only finetune the image encoder in the second stage. As you suggested, $L_{scop}$ is a special case of $L_{cop}$. We first use $L_{cop}$, $L_{t2i}$, and $L_{i2t}$ to learn reliable rank prompts. Then, we use $L_{scop}$ and the cross-entropy loss $L_{ce}$ to finetune the image encoder for better performance. **See more details in Supp. Line 17-25.** In the second stage, all language-related parameters are frozen, so the entropy estimation of the last term in Eq(4) can be ignored, which is equivalent to setting $\lambda = 0$. We hope this explanation clears up the misunderstanding.
**W1: The application of language priors and novelty compared with previous methods**
**[Reply]** As far as we are aware, there have been limited studies that specifically focus on incorporating language priors into metric learning techniques. Compared with language-powered methods, e.g., OrdinalCLIP, we propose RankFormer and a cross-modal ordinal pairwise loss to boost the performance of ordinal classification, achieving SOTA performance on several test benchmarks.
**W2: The learned ordinal properties by RankFormer**
**[Reply]** We explain W2 carefully in Q1. Please refer to Q1 for more details.
**W3: Some typos and mistakes**
**[Reply]** We will ensure to avoid repeated notation and correct any mistakes in our revised paper.
Every attempt has been made to address your comments faithfully in the revised paper. If you have any additional comments, please let us know. Thank you again for your positive and insightful comments. We do appreciate them.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for your response. This addresses my questions. | Summary: This paper proposes a language-driven ordering alignment method for ordinal classification. For the language prompt, this paper introduces the RankFormer, which uses Transformer to learn token-wise attention over a set of rank templates. For the loss function, this paper presents a cross-modal ordinal pairwise loss under the pairwise cross-entropy loss formulation. Experimental results show the effectiveness of the proposed method.
Strengths: 1. Considering language prior to ordinal regression is a promising direction.
2. New SOTA is achieved as shown in the experiments.
Weaknesses: 1. The design of RankFormer doesn’t make any sense. RankFormer is essentially learning R’ = f_W(R), where W and R are learnable parameters. As R’ is only conditional on W and R, learning R and R’ are completely equivalent mathematically. Any R’ learned by the proposed method is in the solution space of R. They are not fundamentally different, and any difference in performance between the two could be due to the randomness of the network.
2. It’s unclear what is the total loss used in this paper.
3. In Line 178, the meaning of “To further refine the CLIP feature space, we also propose a simplified cross-modal ordinal pairwise loss L_scop with language-related parameters frozen.” is unclear. What’s the purpose of L_scop? There is no significant difference between L_cop and L_scop. Why can’t merge them and simply use one Loss? Mathematically, you can achieve the same effect by changing the value of \lambda.
4. Line 187: “Many to many image-text mappings within a batch”. No, it’s one-to-many mapping. For each image, it only has one label. For each category, there may exist multiple hits.
5. Some results are confusing. In Table 2, the variance of L2RCLIP in terms of Accuracy is the highest (7.2) while the MAE variance is relatively low (0.05).
6. There are many inconsistencies in the formulation and equations. For example, in line 147, M represents the class numbers and k is the length of rank templates. But in Eq 2, K is the class number and k is the class index. Again, in Line 182, k becomes the length of global context prompts.
7. The equations from (3)-(6) are pretty messy. The sign in Eq 4 is wrong.
8. Typos: Line 135, there is no Fig. 3.2.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The key design has flaws. There are many typos and errors. This paper is highly unpolished.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer VQC9
We appreciate the reviewer's insightful comments. However, there seems to be some misunderstanding. We would like to clarify the issues and address the questions as follows.
**Q1: The design of RankFormer.**
**[Reply]** First, we want to clarify the misunderstanding regarding the training parameters in RankFormer. As described in Lines 145-147, we **fix the parameters of the rank prompts: $W$ is learnable while $R$ is fixed**. Second, our ablation study demonstrates that RankFormer outperforms the baseline in which $R$ is made a learnable parameter.
**Q2: The total loss used during training.**
**[Reply]** Due to page limitations, we have included this section in the supplementary materials. We utilize the cross-modal ordinal pairwise loss $L_{cop}$ and asymmetrical contrastive loss $L_{t2i}$ and $L_{i2t}$ to learn reliable rank prompts. In the second stage, we employ the cross-entropy loss $L_{ce}$ and simplified cross-modal ordinal pairwise loss $L_{scop}$ to fine-tune the image encoder. Further details can be found in Supp. Line 17-25.
**Q3: Our setting for $L_{cop}$ and $L_{scop}$.**
**[Reply]** When finetuning the image encoders, we fix the learned rank prompts so that the entropy estimation of these text embeddings can be ignored, which is equivalent to setting $\lambda=0$. It is worth noting that $L_{scop}$ is indeed a special case of $L_{cop}$, as you have suggested.
**Q4: Many-to-many mapping.**
**[Reply]** There seems to be a misunderstanding. We employ pairwise contrastive learning, similar to CLIP. This means the <category, image> and <image, category> pairs have a similar many-to-many relationship from both the row-wise and the column-wise perspective. Thus, both a category and an image may have multiple hits.
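As an illustrative sketch (assuming PyTorch; `many_to_many_targets` is a hypothetical helper, not code from our paper), the positive-pair mask for label-based contrastive learning can contain multiple positives per row and per column, unlike the one-to-one identity targets of vanilla CLIP:

```python
import torch

def many_to_many_targets(labels: torch.Tensor) -> torch.Tensor:
    """Build the positive-pair mask for CLIP-style contrastive learning with
    class labels: entry (i, j) is 1 when sample i and sample j share a label.
    Rows (an image's category repeated in the batch) and columns (a category
    with several images) can both contain multiple positives.
    """
    return (labels.unsqueeze(0) == labels.unsqueeze(1)).float()

# Example: two images in the batch share the age label 21.
labels = torch.tensor([21, 21, 35])
mask = many_to_many_targets(labels)
```

Here row 0 has two positives (images 0 and 1), so the mapping is many-to-many rather than one-to-one.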
**Q5: Check the result on Adience dataset.**
**[Reply]** We apologize for the oversight. We have rechecked the test pipeline for the Adience dataset, and the accurate results are as follows:
| L2RCLIP(Ours) | Total | 5-fold 01 | 5-fold 02 | 5-fold 03 | 5-fold 04 | 5-fold 05 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| MAE | 0.36±0.05 | 0.39 | 0.28 | 0.39 | 0.35 | 0.42 |
| Accuracy | 66.2%±4.4% | 64.8% | 73.5% | 64.6% | 68.0% | 60.1% |
**Q6-8: The typos, formula and notation.**
**[Reply]** We will ensure to avoid repeated notation and correct any mistakes in our revised paper.
We have made every effort to address your comments faithfully in the revised paper. If you have any additional comments, please let us know. Thank you once again for your constructive feedback. We truly appreciate it.
---
Rebuttal 2:
Title: Looking forward to the response from Reviewer VQC9
Comment: Dear Reviewer VQC9,
We have tried our best to address all the concerns and provided as much evidence as possible. May we know if our rebuttals answer all your questions? We truly appreciate it.
Best regards,
Author #3203
---
Rebuttal Comment 2.1:
Comment: Q1: What do you mean R is fixed? So R is randomly initialized and then fixed for the whole pipeline? Or do you choose a two-stage framework to learn f_W(R) (one for R and one for W) even though this method already uses a two-stage framework to learn text and image separately?
Q2: It's still unclear for readers. Why do the two stages have different losses? Why does stage 1 use $L_{i2t}$ and $L_{t2i}$ while stage 2 uses $L_{ce}$? How is $\lambda$ chosen for $L_{cop}$? There is no theoretical derivation or explanation here.
One more question here. since $L_{cop}$ and $L_{scop}$ correspond to the loss functions of the two stages, respectively. So what's the setting in Table 5 when only one of these two losses is used?
Many details about understanding methods and code implementation are missing or confusing, and there are errors and inconsistencies in the Equations, all of which give the impression that the submission was rushed through without careful polishing.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer VQC9
Comment: Thanks for your reply. We would like to clarify the issues as follows.
***
**Q1: Further explanation of R.**
**[Reply]** In fact, R is initialized from tokenized rank templates (e.g., “A photo of {age} years old face”) and is fixed for the whole pipeline. The detailed initialization process is as follows. Suppose we have a rank template like “A photo of 21 years old face”. We first tokenize it into input ids by byte-level BPE and then map the input ids into embedding vectors. **Then, we use these vectors to initialize R and fix it during the whole two-stage training.** We have also conducted an **initialization ablation study** in Table 6 of our manuscript to verify that our method is robust to diverse initialization strategies.
**Q2: Further explanation of losses.**
**[Reply]** **Reasons to use different losses.** In the first stage, text features exhibit semantic alignment with image features but lack satisfactory ordinal alignment with other text features. To enhance the ordering relationship among text features, we employ RankFormer and the $L_{cop}$ loss. To preserve semantic alignment and prevent its destruction, we follow CLIP and use a variant of the contrastive loss, i.e., $L_{t2i}$ and $L_{i2t}$. In the second stage, as we observe promising ordinal alignment in the text features, we fix them and use our proposed $L_{scop}$ loss along with the standard classification loss, $L_{ce}$, to finetune the image encoder.
**The connection of $L_{t2i} / L_{i2t}$ and $L_{ce}$.** As you have suggested, we agree that there is no fundamental difference between $L_{t2i}/L_{i2t}$ and $L_{ce}$. Both are likely to work in our settings. We default to using $L_{t2i}/L_{i2t}$ for text-image contrastive learning and $L_{ce}$ for the classification task.
**The choice of $\lambda$.** As deduced in Eq. (6), $\lambda$ should be set to $1$ in $L_{cop}$. However, during the second-stage training, we freeze the text branch, so the corresponding term in Eq. (6) (i.e., $\lambda T_{y_i}^\top T_j$) provides no gradient. Consequently, setting $\lambda=0$ is equivalent in this scenario. We will revise this part for clarity in the revised version.
**Q3: The ablation study for $L_{cop}$ and $L_{scop}$.**
**[Reply]** We list all the experiment settings of the ablation study (Table 5 in our manuscript). All experiments follow a two-stage training approach. Rank prompts are initialized as described in Q1, while context prompts are initialized randomly. We denote $L_{t2i}/L_{i2t}$ as **I-A**, $L_{cop}$ as **I-B**, $L_{ce}$ as **II-A**, and $L_{scop}$ as **II-B**. The ablation study for $L_{cop}$ and $L_{scop}$ is highlighted in bold. We will provide ablation details in the revised version of our manuscript.
**Table 1. The training setting in ablation study**
|Ablation | Setting 0 | Setting 1 | Setting 2 | Setting 3 | Setting 4 | Setting 5 | Setting 6 | Setting 7 |
|--- | --- | --- | --- | --- | --- | --- | --- | --- |
|Rank prompts| Learnable | RankFormer | **Learnable** | **Learnable** | **Learnable** | **RankFormer** | **RankFormer** | **RankFormer** |
|Context prompts| Learnable | Learnable | **Learnable** | **Learnable** | **Learnable** | **Learnable** | **Learnable** | **Learnable** |
|Loss in Stage I| I-A | I-A | **I-A,I-B** | **I-A** | **I-A,I-B** | **I-A,I-B** | **I-A** | **I-A,I-B** |
|Loss in Stage II| II-A | II-A | **II-A** | **II-A,II-B** | **II-A,II-B** | **II-A** | **II-A,II-B** | **II-A,II-B** |
We sincerely appreciate your response. We have made every effort to address your concerns, and we commit to including more details and code implementations in our revised version. Additionally, we will carefully address any typos and other errors.
Please let us know if it addresses your concerns. Thank you again for your insightful comments. We truly appreciate them. | null | null | null | null |
Have it your way: Individualized Privacy Assignment for DP-SGD | Accept (poster) | Summary: This paper designs variants of differentially private SGD to satisfy different privacy expectations, e.g., users can choose one from high, medium, or low levels of privacy. There are two variants of DP-SGD, one changes the sampling probability of different groups, and the other changes the clipping threshold of different groups.
Strengths: 1. The Sample and Scale methods are easy-to-implement. Their design aligns nicely with the fact that DP-SGD adds noise to the aggregated gradient, instead of separate noise for individual gradients.
2. Experiments on several common vision datasets show that individualized DP-SGD is better than DP-SGD with a uniform (the smallest) privacy parameter. The authors also run membership inference attacks to show that groups with smaller privacy parameters do have better empirical privacy.
3. The paper is very well written and easy to follow.
Weaknesses:
1. The privacy expectations in the current version are independent of model accuracy. In practice, the minority groups, whose accuracy is usually worse, may prefer stronger privacy than other groups. What would happen if the group with the worst accuracy has the strongest/weakest privacy expectation? One way to implement this is to extend the experiments in Appendix D.2 by assigning a larger/smaller privacy parameter to the class with the worst non-private accuracy.
2. The privacy expectation itself may be data dependent and hence the individualized privacy assignment cannot be made public, as the authors discussed in Line 68 – Line 73. For example, in a medical dataset, users diagnosed with certain diseases may have higher privacy expectations.
3. The authors use a shallow convolutional network for the experiments. It would be better to see whether the results still hold for deep learning models, e.g., ResNets or transformers.
4. The authors only report the average accuracy. It would be better if the authors could show the accuracy of different groups. A recent line of work shows that DP has disparate impact on model accuracy [1]. Will the groups with smaller privacy assignments lose more accuracy?
[1]: Differential Privacy Has Disparate Impact on Model Accuracy. https://arxiv.org/abs/1905.12101
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and provide our responses below.
> **W1: Influence on utility for underprivileged groups**
We already present such a result in Table 8 of our submission. We selected the CIFAR10 dataset and assigned a lower or higher privacy budget to one of the classes. Class 3 can be considered an underprivileged group since it has the lowest accuracy across all classes, only 32.64%. If the privacy budget for this class is decreased further to eps=1 while the other classes maintain eps=2, the accuracy for class 3 plummets to only 6.67%, while the accuracy for the other classes increases. In contrast, when a higher privacy budget of eps=3 is assigned to class 3, its accuracy increases substantially and becomes comparable with the accuracy of the remaining classes.
> **W2: Confidentiality of the privacy values**
There is no need to reveal the privacy assignments and the end user of the released model should be oblivious to the training data points’ individual privacy preferences. In the general response, we detail how the fact that end users can only interact with the trained model preserves confidentiality of the training data points’ privacy preferences.
> **W3: Extending experimentation to ResNet and Transformers**
We thank the reviewer for their suggestion and we extended our experimentation. Among other experiments (see general response), we
- trained a ResNet18 from scratch with CIFAR10
- and fine-tuned a BERT transformer (bert-base-uncased) on the SNLI dataset for natural language inference (NLI).
A summary of the results is provided in this table for convenience:
| Model | Dataset | DP-SGD (eps=5) | Sample (eps=5,7.5,10) | Scale (eps=5,7.5,10) |
|----------|---------|----------------|--------------------|-------------------|
| ResNet18 | CIFAR10 | 47.52+-0.84 | 48.52+-0.69 | 48.77+-0.73 |
| BERT | SNLI | 75.91+-0.23 | 76.11+-0.21 | 76.5+-0.17 |
In both experiments, our results show that the Scale and Sample methods again outperform standard DP-SGD. See the general response for more experimental details.
> **W4: Per privacy-subgroup utility**
We show the accuracy for different subgroups with their different privacy budgets in Table 8. We consider each subgroup to be a different class. We consider the test data to divide into the same per-class subgroups as the training data. This enables us to show a per-group (class) test accuracy under the given different privacy budgets. The observed trend indicates that if a group is willing to sacrifice more privacy, it will gain higher utility of the model whereas for groups with stronger privacy preferences, the opposite is the case.
We would also like to clarify that in the main paper, for example in Table 2, we again report test accuracies. Test data points do not get the possibility to specify any privacy budget (since the model only predicts on them and does not train on them, their privacy is not leaked anyway). For the training data points in the main paper, we assigned privacy budgets at random during training. Hence, there is no mapping from subgroups of the test data to subgroups of the training data. As a result, when reporting test accuracies in this setup, it is impossible to show per-group test accuracies.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Regards W1, it may be better to clarify that Class 0 is the underprivileged group in Appendix D.2.
After reading the response and reviews from other reviewers, I decided to maintain my positive score.
---
Reply to Comment 1.1.1:
Title: Thank you for the feedback
Comment: We thank the reviewer for the positive feedback and maintaining the score. We will extend the description in Appendix D.2 and include the term "the underprivileged group". | Summary: This paper proposes extensions of the DP-SGD algorithm to support individualized differential privacy (called the IDP-SGD approach). Unlike traditional differential privacy, which imposes a single privacy budget epsilon to all data points, the data points may now have different privacy budgets. Two extensions are proposed. First, the Sample method adjust the sampling rates of data points depending on their privacy budgets. Second, the Scale method adjusts the individual clip norms to effectively add individualized noise. The noise itself cannot be directly individualized due to the DP-SGD algorithm behavior of adding per-batch noise. Experiments show that IDP-SGD outperforms DP-SGD and IPATE in terms of utility and can be complemented with privacy accounting techniques.
Strengths: * Proposes novel extensions of DP-SGD to support individualized privacy.
* The Sample and Scale methods are easy to deploy and have theoretical guarantees.
* Experiments show that IDP-SGD outperforms baselines (DP-SGD and I-PATE) and can be used with privacy accounting.
Weaknesses: W1) The core contributions of the paper are not contained in the main part of the paper.
* The theoretical analysis should be more specific because the proposed individual privacy notion with the extended $\delta$ term is different from previous personalized DP and conventional DP. In the proof of Theorem G.1., the original SGM seems to be about conventional DP and RDP, but the author simply applied it to individual privacy without specific proof about its properties.
* The paper shows the algorithm that finds parameters, but it does not provide a way to adapt the parameters to the original DP-SGD. Because DP-SGD uses a single sample rate and a single clip norm, adapting multiple sampling rates and clipping norms is also a new mechanism. It seems that a more specific explanation is needed in terms of implementation. For example, the original DP-SGD assumes Poisson sampling in principle but approximates it due to computational cost. However, this is possible because the sampling probability is fixed, and it seems necessary to explain how to transform this to satisfy independent sampling.
* It is difficult to know the exact process of the algorithm. Algorithm 3 is a very important part of the paper, but it is difficult to understand properly because it is omitted in the main part.
W2) It is not clear how theoretically optimal Sample and Scale are. Looking at Table 1, $\epsilon_p$ of Sample and Scale could have a loose bound depending on the $\sigma_{\mathrm{SAMPLE}}$ and $\sigma_p$ values, respectively. It would be helpful to actually see what these values are in the experiments.
W3) In Algorithm 1, the paper says that getSampleRate is equivalent to the Opacus function get_noise_multiplier without a detailed description, but the two functions seem to return different objects. The function getSampleRate returns a sample rate, but the Opacus function get_noise_multiplier returns the noise multiplier $\sigma$.
W4) The comparison between Sample and Scale is not clear enough. The two methods are introduced and only empirically compared, and it seems that Sample mostly dominates Scale (according to Table 2). Looking at the overall algorithm, both techniques look similar in terms of efficiency. Then when should one use Scale?
W5) Also, a natural approach seems to be combining Sample and Scale where the sampling and clip norms are both adjusted. Why not use this combination?
W6) The notion of individualized privacy ($\epsilon_p$, $\delta$)-DP seems to be confused with the original ($\epsilon$, $\delta$)-DP. This confusion leads to other confusions, e.g., whether ($\epsilon_1$, $\delta$)-DP in Theorem 3.1. and ($\epsilon_P$, $\delta$)-DP in Theorem 3.2. mean individual privacy or conventional DP.
W7) The experimental setup of only using up to three privacy budgets seems limiting. What happens if there are many more than three budgets?
W8) Some reference numbers are wrong. For example, in the last paragraph of the proof of Theorem G.1, Mironov et al. is [21] or [22], not [20].
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1) What is the actual name of ({$\epsilon_1$,...,$\epsilon_n$}, $\delta$)-DP and ($\epsilon_p$, $\delta$)-DP? Are they Individualized Privacy or something else? If they are Individualized Privacy, why did the authors change the term “Personalized DP” to “Individualized Privacy” even though you referred to the Personalized DP paper?
Q2) How can getSampleRate function be equivalent to the function get_noise_multiplier in Opacus? getSampleRate seems to return sample rate with input argument noise multiplier, while get_noise_multiplier returns noise_multiplier sigma.
Q3) The comparison between DP-SGD and the proposed algorithms is as expected because DP-SGD assumes the same privacy level over all data points. Are there any results related to computational cost?
Q4) A detailed description of the properties of each Sample and Scale is lacking. Sample seems to be superior overall in performance. Is there any reason to introduce the Scale technique? Or, what are the advantages of the Scale method compared to the Sample method?
Q5) What is Thm. 4 and Thm. 11 from Mironov et al. [21] in the proof of Theorem G.1.? I could only find Proposition and Lemma in the paper. Did you mean [22]?
Q6) How can you simply prove Theorem G.1. and Theorem G.2. without proving some basic theorems or properties of newly designed privacy notion? I don't think previous DP techniques can be applied to individual privacy directly. The authors just directly use the conversion technique from RDP to ($\epsilon$,$\delta$)-DP of previous works as a technique for the individual privacy conversion even though the privacy notion is new.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1 & Q6: Application of SGM theorems to IDP**
The proof in G.1 showing that the entire mechanism satisfies $(\{\varepsilon_1, \varepsilon_2, \dots, \varepsilon_P\}, \delta)$-IDP is built on the observation that our methods can be considered as $P$ simultaneously executed SGMs that update the same model: The privacy groups divide our training data in $P$ disjoint subsets. Each of the $P$ SGMs operates on a different partition of the training data, using different individual sample rates or clip norms.
Given that we perform the privacy accounting also on a per-group basis, each privacy group is just an original SGM (with different privacy parameters). Hence, the original SGM theorems apply.
> **W1: Implementation of IDP-SGD**
We build our implementation on top of Opacus.
**Code flow**: We extend the PrivacyEngine to additionally take per-point budgets, i.e., an epsilon value for each training data point and to derive the adequate privacy parameter for each point (sample rates/clip norms). We also added an IndexedDataset to Opacus to be able to refer to individual data points via their index.
**Individual sample rates**: We implement a data loader based on a custom weighted sampler. Our sampler adapts the functionality of Opacus’ UniformWithReplacementSampler and also uses the torch.rand function. In contrast to the UniformWithReplacementSampler that compares the generated random values against a uniform threshold to determine which data points will be in the current mini-batch, we compare point-wise, and each data point has an individual threshold that depends on its privacy assignment.
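To make the sampling mechanism concrete, a minimal sketch (assuming PyTorch; `sample_minibatch` is an illustrative helper, not our attached implementation) of Poisson sampling with per-point rates:

```python
import torch

def sample_minibatch(per_point_rates: torch.Tensor) -> torch.Tensor:
    """Poisson-sample one mini-batch with an individual rate per data point.

    Opacus' UniformWithReplacementSampler compares torch.rand(n) against a
    single scalar rate; here, point i is included independently with its own
    probability q_i derived from its privacy budget. Returns the indices of
    the sampled points.
    """
    mask = torch.rand(per_point_rates.numel()) < per_point_rates
    return mask.nonzero(as_tuple=True)[0]
```

Points with a larger privacy budget receive a larger rate q_i and therefore appear in more mini-batches over the course of training.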
**Individual clip norms**: The standard private optimizer in Opacus separately clips the gradient of each data point within a mini-batch. For our Scale, during clipping, each data point is clipped according to their privacy budget (with respective clip norm).
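The per-point clipping can be sketched as follows (assuming PyTorch; `clip_per_sample` is an illustrative helper, not our attached implementation):

```python
import torch

def clip_per_sample(grads: torch.Tensor, clip_norms: torch.Tensor) -> torch.Tensor:
    """Clip each per-sample gradient to its own norm bound.

    grads: (batch, dim) per-sample gradients; clip_norms: (batch,) individual
    bounds C_i derived from each point's privacy budget. Standard DP-SGD uses
    a single scalar C for all rows.
    """
    norms = grads.norm(dim=1)                                 # per-sample L2 norms
    factors = (clip_norms / (norms + 1e-12)).clamp(max=1.0)   # never scale gradients up
    return grads * factors.unsqueeze(1)
```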
The above description is reflected in the code attached to this submission.
> **W1: Algorithm 3**
We made space for Algorithm 3 in the main paper by merging Figure 3a) and 3b).
> **W2: Optimality of individual privacy parameters**
Our code implements Algorithms 1 and 2 to derive privacy parameters such that all privacy groups' budgets will be exhausted at the end of training. Copying the per-group final privacy consumption from the log file of a CIFAR10 training run (first privacy setup, Table 2), we see that we achieve `Sample: [0.997, 1.995, 2.992]` and `Scale: [0.998, 1.995, 2.994]` in practice with eps=1,2,3 specified. I.e., the actual privacy consumption is tight. When adjusting the precision parameter in our calculation of privacy parameters from 0.0001 (displayed results) to, e.g., 0.00001, values even closer to 1,2,3 can be achieved.
> **W3&Q2: get_noise_multiplier vs. get_sample_rates**
We meant that these functions are *conceptually* equivalent. They both rely on an interactive process where one privacy parameter is to be found in a binary-search style while the others are fixed.
> **W4&Q4: Comparing Sample and Scale**
We report the two best performing methods in the main paper (the other methods are in Appendix E).
We observe two scenarios in which Scale is superior to Sample. (1) For full-batch gradient descent [9], Scale can be used while Sample is not applicable, given that all data points are sampled. (2) For extremely large datasets (e.g., >500k points) and very small mini-batch sizes (e.g., 32), rounding of the extremely small sample rates can make the rates for two different privacy groups very similar or even identical, preventing fine-grained individualized privacy. This does not happen for Scale, where the clip norms are independent of the number of data points. The results in the general response show that, e.g., Scale outperforms Sample for BERT on SNLI.
> **W5: Combining Sample and Scale**
The combination of Sample and Scale extends the number of hyperparameters and potential to find better performing algorithms. It can be implemented as follows: obtain individual sampling probabilities as in SAMPLE. Once a mini-batch is sampled, use the respective data points’ indices and clip their gradients as in SCALE.
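A minimal sketch of the combined step just described (assuming PyTorch; `combined_sample_scale_step` and `sigma` are illustrative names, not code from our submission):

```python
import torch

def combined_sample_scale_step(per_sample_grads: torch.Tensor,
                               sample_rates: torch.Tensor,
                               clip_norms: torch.Tensor,
                               sigma: float) -> torch.Tensor:
    """One noisy gradient aggregation combining Sample and Scale.

    1. Poisson-sample a mini-batch with per-point rates (Sample).
    2. Clip each selected gradient to its own norm bound (Scale).
    3. Sum and add a single Gaussian noise draw, as in DP-SGD; here the noise
       scale is taken relative to the largest clip norm (an assumption for
       illustration only).
    """
    mask = torch.rand(sample_rates.numel()) < sample_rates
    idx = mask.nonzero(as_tuple=True)[0]
    g = per_sample_grads[idx]
    norms = g.norm(dim=1)
    factors = (clip_norms[idx] / (norms + 1e-12)).clamp(max=1.0)
    clipped_sum = (g * factors.unsqueeze(1)).sum(dim=0)
    noise = sigma * clip_norms.max() * torch.randn_like(clipped_sum)
    return clipped_sum + noise
```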
> **W6: Notation for IDP**
We would like to clarify that ($\epsilon_1$, $\delta$)-DP in Theorem 3.1. and ($\epsilon_P$, $\delta$)-DP in Theorem 3.2. mean conventional DP. Semantically, $\epsilon_P$, $\delta$-DP expresses that the epsilon value $\epsilon=\epsilon_p$ is equal to the privacy budget of the group with the highest privacy budget from individualized DP.
To avoid confusion with the notation, we updated the paper to use the abbreviation $(\{\epsilon_1, \epsilon_2, \dots, \epsilon_P\}, \delta)$-**I**DP or ($\epsilon_P$, $\delta$)-**I**DP when we refer to individualized privacy, and ($\epsilon{(_P)}$, $\delta$)-DP for standard DP.
> **W7: More privacy groups than 3**
We already present such a result in Figures 5, 6, and 7. In these setups, we have *10* different privacy groups. To show the flexibility of our methods, we ran additional experiments on the MNIST dataset, using 100 evenly sized privacy groups with budgets [1, 1.05, 1.1, ..., 5.9, 5.95] and report the results in the general response.
> **Q1: Naming “Individualized”-DP**
There are different names for this concept. We also cite [2] who refer to the concept as heterogeneous differential privacy. We use Individualized DP-SGD following the naming of Individualized PATE [5].
> **Q3: Computational costs**
We performed experiments measuring runtimes. The detailed results can be found in the general answer. We observe that Sample and Scale do not significantly increase runtime in comparison to standard DP-SGD with (406.33 sec for Sample, 414.33 for Scale vs. 403.67 for DP-SGD).
> **W8&Q5: References**
Yes, we meant [22]. The full paper in the supplementary material accidentally is missing citation [7] from the submission. We fixed all the citations for the updated version of our paper.
---
Rebuttal Comment 1.1:
Title: Sharing Additional Results Regarding the Combination of Sample and Scale
Comment: To further illustrate how we addressed the point about combining our Sample and Scale methods, we implemented the combination that we described in the rebuttal.
We allow weighting the two methods with arbitrary fractions (e.g., 50% Sample, 50% Scale) and derive the privacy parameters such that all privacy groups exhaust their respective budgets at the end of the specified number of iterations.
We ran additional experiments to showcase the advantage of combining both methods. We used the MNIST dataset and the first privacy setup (0.34, 0.43, 0.23) with privacy budgets {1,2,3} and weighted our methods as indicated in the table below.
| Sample (weight in %) | Scale (weight in %) | Test Accuracy (in %) |
|------------|-----------|---------------|
| 0 | 100 | 97.78+-0.08 |
| 25 | 75 | 97.75+-0.10 |
| 50 | 50 | 97.80+-0.10 |
| 75 | 25 | 97.80+-0.10 |
| 100 | 0 | 97.81+-0.09 |
The baselines (first and last rows) are taken from Table 2 in the original submission, while the three new rows in between report the average test accuracy and standard deviation over ten independent random runs for the different weights (w) assigned to the respective methods.
Our results demonstrate a clear progression from the lower-performing Scale approach to the better-performing Sample. This aligns with expectations based on our implementation, where we combine the methods based on the parameters found for them sequentially.
Note that each of our methods has its advantages and disadvantages and the combination of both methods provides more options to balance (dis)advantages. For example, when some points’ sample rates are extremely small, then the model might never see those points during training and the combined method could reduce this possibility while still improving over the Scale-only method. | Summary: This paper proposed two variants of DP-SGD by manipulating the sampling rate and the gradient clipping bound for different groups to achieve the goal of having different privacy budgets for those groups and improving the overall performance of DP-SGD. The authors proved theoretical privacy guarantees for both variants. They also conducted an extensive experimental evaluation to showcase the advantage of their methods in many aspects.
Strengths: 1. The authors considered an important problem, having different privacy budgets for different groups of people to boost the overall utility, and proposed two algorithms to achieve the goal. The privacy guarantee of the algorithms is proved. Both algorithms are easy to implement and look novel although intuitive to me.
2. The experiments in this paper are adequate for the evaluation of the proposed methods. The results are well-organized and visualized, which supports the validity and the advantage of the methods, and the details are provided enough for reproducibility. I understand and appreciate that the authors put most of the valuable experimental results in the appendix due to the page limitation.
3. The writing and the organization of the content are good.
Weaknesses: 1. The composition theorem for Algorithms 1 and 2 is not proved or discussed in this paper. The proofs of Theorems G.1 and G.2 look okay to me, although I think more explanation would make them friendlier to a general audience. However, I think there is still a need to show a composition theorem similar to the results in Feldman and Zrnic [9]. Moreover, if it is not much work to rewrite the related parts of the paper, I think a stricter way to account for the privacy budget usage is via the Rényi Filter [9], i.e., individual RDP instead of RDP.
2. Section 4.2 uses membership inference to empirically verify the privacy protection for the claimed DP guarantee, but the results in Figure 2 do not explicitly show the protection separately for each group ($\varepsilon=10$ and $\varepsilon=20$) compared with the DP guarantee. The authors may show the curves under two additional settings: vanilla DP-SGD with $\varepsilon=10$, and vanilla DP-SGD with $\varepsilon=20$. Then compare the two solid curves in Figure 2 with the two additional curves respectively.
3. Both Algorithms 1 and 2 will potentially change the objective function of SGD (although DP-SGD with the gradient clipping has already changed it). There is no discussion on how specific choices of privacy budget distribution will affect the convergence and final performance of the optimization. This is important since it is related to the privacy-utility tradeoff. Please see my comment in *Limitations*.
Minor weaknesses:
1. Line 66. I think Table 7 is the same as Table 2 which is in the main text, not the appendix.
2. Line 86. $\log 1/\delta$ -> $\log(1/\delta)$
3. Line 98, equation (2). The inequality holds with many constraints on $\alpha$. See Theorem 11 in Mironov et al. [21]
4. Algorithm 1. The authors write that it is equivalent to Opacus' function `get_noise_multiplier`, while Algorithm 2 is also said to be equivalent to the same function. I think there is a typo.
5. Algorithm 1, Line 5. I think this is more like a grid search which always decreases $\sigma_{\mathrm{sample}}$, while a binary search would be better.
6. Line 209. The notation $\stackrel{\text{!}}{=}$ is not defined.
7. Line 767 in Section G.2. $\sum_{L\subset D}$ -> $\sum_{L\subset D'}$
---
I have read the rebuttal which addressed my questions and the weakness concerns.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In Line 379, could you explain the meaning of 'batch gradient descent'? Is it different from SGD? If yes, maybe it can be replaced with the word 'full-batch gradient descent' or just 'gradient descent'.
2. In Table 8, we can see that for the first two rows, the accuracy decreases for classes 1-9 with a higher privacy budget of class 0 compared with the results with the same privacy budget of class 0. The same phenomenon exists in other rows. Intuitively, the higher budget of other classes should provide more information for the classifier so that the accuracy for each group would increase. Could you provide more insight into why this intuition is incorrect under this setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. The discussion of the incentive and consequence of people choosing weak privacy protection is not enough in the main text. I found the content in Section D.2 very interesting since it aligned with my intuition: the more privacy being sacrificed, the better utility could potentially be obtained. Also, the results in Table 8 show that having a group with a higher privacy budget will affect the utility of other groups, which could lead to problems like the Prisoner's dilemma where people in the dataset have to give a higher privacy budget in order to maintain their original utility given the fact that other people are willing to have a higher privacy budget.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions and provide our answer below:
>**W1: Composition theorems**
We would like to point out that our algorithms have the exact same composition (within every privacy group individually) as standard DP-SGD and do not require additional theorems on composition. In detail, our contribution is to propose a novel way of individual privacy **assignment**. Our privacy accounting follows the standard accounting also used within DP-SGD, with the difference that accounting is performed within each privacy group separately. Feldman and Zrnic [9]’s contribution, in contrast, is to propose a novel individual privacy **accounting** mechanism which then requires a different composition. Their accounting differs from standard DP-SGD accounting in the sense that it does not account for worst-case privacy leakage, but instead estimates the ‘actual’ privacy leakage. The composition from standard DP-SGD is, thereby, not directly applicable for them, while it is for us.
Finally, we would like to note that in Section 5, we show that our individualized privacy assignment can be integrated into the individualized privacy accounting by [9]. We do so by assigning individual privacy budgets to different data points and then rely on the individual privacy accounting by [9] (using their composition theorem).
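To make the per-group composition argument concrete, here is a toy sketch (our hedged simplification, not the actual subsampled-Gaussian accountant from the paper): for the plain Gaussian mechanism, one step is $(\alpha, \alpha/(2\sigma_p^2))$-RDP, $T$ steps compose additively within each group, and the standard RDP-to-$(\varepsilon,\delta)$-DP conversion is applied per group, exactly as in DP-SGD.

```python
import math

def epsilon_after_steps(sigma: float, steps: int, delta: float) -> float:
    """(eps, delta)-DP guarantee after `steps` Gaussian-mechanism releases
    with noise multiplier `sigma`: one step is (alpha, alpha/(2 sigma^2))-RDP,
    steps compose additively, and eps = rdp + log(1/delta) / (alpha - 1),
    minimized over a grid of RDP orders alpha."""
    orders = [1 + x / 10 for x in range(1, 1000)]
    return min(steps * a / (2 * sigma ** 2) + math.log(1 / delta) / (a - 1)
               for a in orders)

# Accounting runs independently per privacy group: the group trained with
# more noise exhausts a smaller budget over the same number of iterations.
sigmas = {"strict_group": 4.0, "relaxed_group": 2.0}
budgets = {g: epsilon_after_steps(s, steps=1000, delta=1e-5)
           for g, s in sigmas.items()}
assert budgets["relaxed_group"] > budgets["strict_group"] > 0
```

No cross-group composition theorem is needed here because each group's budget is tracked by its own, unmodified DP-SGD-style accountant.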
>**W2: MIA for two additional settings**
We thank the reviewer for this suggestion. We performed this experiment, trained a model with $\varepsilon=10$ and another one with $\varepsilon=20$ using standard DP-SGD on CIFAR10. Then, we performed a LiRA MIA attack with 512 shadow models (the same as in Figure 8a and 8b in the submission) and report the detailed results in the general response. The resulting AUCs can be summarized as follows:
||All points ($\varepsilon=10$ and $\varepsilon=20$)|Points with $\varepsilon=10$|Points with $\varepsilon=20$|
|-----------------------------|------------------------------------------------------|-------------------------------|------------------------------|
|Sample|0.56|0.537|0.581|
|Scale|0.571|0.552|0.589|
|DP-SGD ($\varepsilon=10$)|-|**0.568**|-|
|DP-SGD ($\varepsilon=20$)|-|-|**0.59**|
Our key observations are:
1. IDP-SGD reduces the privacy risk for the data points with $\varepsilon=10$ (in standard DP-SGD, their AUC=0.568 which is higher than with Sample 0.537 or Scale 0.552).
2. This privacy gain does not come at a large expense for points with $\varepsilon=20$, whose privacy risk remains roughly the same (AUC=0.59 in DP-SGD vs. 0.581 for Sample and 0.589 for Scale).
> **W3: Objective function, convergence, and final performance**
We would like to point out that already DP-SGD itself (especially with the gradient clipping) does not offer rigorous convergence guarantees except in convex, Lipschitz, or Lipschitz-like regimes. For this reason, it is a common practice in work on DP-SGD to rely on empirical evaluation for the utility, see for example [Differentially private learning needs better features (or much more data)](https://arxiv.org/abs/2011.11660), [Unlocking high-accuracy differentially private image classification through scale](https://arxiv.org/pdf/2204.13650v2.pdf).
Following this common practice, we experimentally evaluate a wide range of privacy distributions and different privacy parameters (e.g., two distributions with eps=1,2,3 in Table 2, a distribution with eps=10,20 in Section 4.2, another distribution with eps=1,2,3 in Table 8 and Figure 5+6, and a distribution with ten different epsilon in range between 0.5 and 6.1 in Table 9).
In the general response, we also provide a new experiment with 100 different privacy groups.
Over all experiments, we increase the utility of DP-SGD and do not observe any convergence issues in our experimental results.
> **Minor comments**
We thank the reviewer for carefully reading our paper. We removed Table 7, and line 66 now refers to Table 2. We also corrected the other points mentioned by the reviewer and explain the notation $\stackrel{!}{=}$, which reads “should be equal to,” in the updated version of the paper.
> **Q1: Batch gradient descent**
We indeed refer to full-batch gradient descent and adopted the reviewer’s suggestion on using this term within our work.
> **Q2: Utility increase and decrease due to individualized privacy**
We thank the reviewer for their careful study of Table 8 and are happy to provide further insights into the observed phenomenon:
What we observed in our experiments is that the group with a higher privacy budget (e.g., class 0) provides more information to the model and makes the model learn the features of this particular group better (reducing the model’s performance on other groups’ features). This phenomenon is illustrated very clearly, for example, in Figure 6a): if class 9 gets the higher privacy budget (last column), the class that suffers the largest utility loss is class 4, which is the digit most similar to 9 and suffers from the model paying more attention to the features of the 9. This shows that when the privacy groups differ significantly in their distributions, increasing the privacy budget of one group benefits especially that group’s utility.
---
Rebuttal Comment 1.1:
Title: Thank you for the response!
Comment: W1. Now I understand the composition results in this paper.
W2. This table looks good to me.
W3. Yes. I understand the difficulty here. My concern is more about the characterization of the utility loss for groups with lower privacy budgets. If the convergence (e.g., the decrease of train loss) is slower for the groups with lower privacy budget, then DP-SGD may need a longer time to converge which also affects the choice of the number of iterations of SGD. I understand that this may be out of the scope of this paper but still raises my concern.
Based on your explanation for W1, I have increased my score from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Thank you to the reviewer
Comment: We would like to thank the reviewer for their careful read of our response, for engaging in the discussion with us and for increasing their score.
---
Reply to Comment 1.1.2:
Title: Sharing additional Results regarding the Training Dynamics and Loss History of the privacy group with smallest epsilon
Comment: To address the reviewer’s concern regarding the loss history of the group with the smallest privacy budget ($\varepsilon=1$), we ran additional experiments.
We collected the loss values (and accuracy values) over the 4 following setups on the MNIST dataset and privacy setup 2 (54%,37%,9%, $\varepsilon$={1,2,3}):
- loss value for privacy group with $\varepsilon=1$ for the Sample method;
- loss value for privacy group with $\varepsilon=1$ for the Scale method;
- training with standard DP-SGD ($\varepsilon=1$) solely on the 54% of data points that have the privacy budget of $\varepsilon=1$, and
- training with the standard DP-SGD ($\varepsilon=1$) on 100% of the MNIST dataset.
In the following table, we depict the training loss of the privacy group with $\varepsilon=1$ during training:
| **Epochs** | **Sample, $\varepsilon$=1,2,3 (loss of group eps=1)** | **Scale, $\varepsilon$=1,2,3 (loss of group eps=1)** | **DP-SGD 54%, $\varepsilon$=1** | **DP-SGD 100%, $\varepsilon$=1** |
|------------|---------------------------------------------|--------------------------------------------|-----------------------|------------------------|
| 20 | 0.145 | 0.148 | 0.211 | 0.162 |
| 40 | 0.116 | 0.121 | 0.187 | 0.143 |
| 60 | 0.109 | 0.115 | 0.183 | 0.142 |
| 80 | 0.105 | 0.112 | 0.184 | 0.142 |
After training finishes, this leads to the following accuracies for the respective group:
| | **Sample, $\varepsilon$=1,2,3** | **Scale, $\varepsilon$=1,2,3** | **DP-SGD 54%, $\varepsilon$=1** | **DP-SGD 100%, $\varepsilon$=1** |
|-------------------|-----------------------|----------------------|-----------------------|------------------------|
| Test Accuracy (%) | 97.43 | 97.26 | 95.36 | 96.56 |
Our results indicate that the availability of more information from the data of different privacy groups in our Sample and Scale methods enables the model to learn better features. This leads to lower loss for the data points with privacy budget $\varepsilon=1$ than if the model was trained on only those 54% of points. The effect of including more data (i.e., using 100% of the MNIST dataset), all with $\varepsilon=1$, is weaker, indicating that even the data with highest privacy requirement benefits substantially from our individualized privacy in terms of reduced loss and increased accuracy. | Summary: This paper proposes two variants of Differentially Private Stochastic Gradient Descent (DP-SGD) to train machine learning models that satisfy approximate personalized differential privacy (PDP), following the definition of Jorgensen et al. [15]. In contrast to vanilla DP-SGD where all points in the training dataset are assigned a global, uniform privacy budget $\varepsilon$, the authors consider a scenario where points are split into privacy groups $\{\mathcal{G}_1,\ldots,\mathcal{G}_P\}$ and each group $\mathcal{G}_p$ is assigned a different privacy budget $\varepsilon_p$. This allows individuals who contribute training data to specify their privacy preferences, reflecting the variety of privacy attitudes among users observed in past surveys.
To achieve PDP the paper modifies DP-SGD in one of two ways:
1. **Sample** method: adjusting the sampling rate used to construct mini-batches, so that points in groups with a higher privacy budget are sampled more often than points in groups with lower privacy budget (and thus, stringent privacy preference).
2. **Scale** method: adjusting the noise multiplier to scale the variance of noise added in a per-point basis. Rather than doing this directly, which would be inefficient, the method achieves the same effect indirectly by scaling the clipping norm of per-point gradients.
These modifications are such that given target group privacy budgets and a number of training steps, the mini-batch noise multiplier in DP-SGD can be chosen so that the privacy budget for each group is exhausted at the end of training, maximizing utility.
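To fix intuition, the two modifications can be condensed into the following toy sketch (my own simplification with hypothetical names, not the paper's actual code; per-group sample rates `q[p]` implement Sample, per-group clip scales `s[p]` implement Scale):

```python
import math
import random

def idp_sgd_step(grads, groups, q, s, clip, sigma):
    """One simplified IDP-SGD step on per-example gradients (lists of floats).
    Sample: example i from group p enters the mini-batch with probability q[p].
    Scale: its gradient is clipped to norm s[p] * clip, so the shared noise of
    std sigma * clip perturbs stronger-privacy groups relatively more."""
    dim = len(grads[0])
    out = [random.gauss(0.0, sigma * clip) for _ in range(dim)]
    for g, p in zip(grads, groups):
        if random.random() >= q[p]:          # per-group Poisson subsampling
            continue
        norm = math.sqrt(sum(x * x for x in g))
        factor = min(1.0, s[p] * clip / norm) if norm > 0 else 0.0
        for j in range(dim):
            out[j] += factor * g[j]          # clipped contribution
    return out
```

With all `q[p]` equal and all `s[p] = 1` this collapses to a standard (simplified) DP-SGD step, which is the sense in which both methods are drop-in modifications.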
The paper evaluates **Scale** and **Sample** on CNNs trained over MNIST, SVHN and CIFAR-10 using 3 privacy groups with budgets $\varepsilon_p = p$ for $p \in \lbrace1,2,3\rbrace$, distributing points into groups according to proportions used in prior work. This evaluation shows top-1 accuracy improvements ranging between 1-5% over vanilla DP-SGD with a uniform privacy budget $\varepsilon_1 = 1$ (which guarantees PDP, but underutilizes the budget for the other two groups). The methods also compare favorably to Individualized PATE [5], which uses different techniques but achieves the same privacy guarantee.
The paper also demonstrates that the empirical evaluation of membership inference attacks against models trained with PDP using the above two methods shows the expected gap between groups with different privacy budgets. Finally, the authors discuss how to combine individual privacy assignment with individual privacy accounting as done by Feldman and Zrnic [10] for batch gradient descent.
The supplemental material provides implementations of **Scale** and **Sample** in the Opacus library as well as scripts to reproduce the results in the paper.
Strengths: 1. **First DP-SGD variants designed to achieve personalized differential privacy**
There is abundant literature for adapting differentially private mechanisms for general data analysis to achieve personalized differential privacy as well as for computing personalized differential privacy guarantees for training data points in ML models. While PDP accounting alone can be used to improve model utility modestly (see Feldman and Zrnic [10]), as far as I know the only other method that trains models with individual privacy assignments is Individualized PATE [5] which uses significantly different techniques.
2. **Original combination of existing mechanisms**
The **Sample** mechanism is inspired by the "Sampling" mechanism of Jorgensen et al. [16], while **Scale** is inspired by the "stretching" mechanism of Alaggan et al. [2]. Combining these mechanisms with DP-SGD is relatively straightforward, but has not been done before.
3. **Problem is well motivated and solution is contextualized relative to prior work**
The authors discuss related work up front, explain how their solution fits into the existing literature, and justify their contributions.
4. **Complete algorithmic descriptions of proposed methods** The paper describes the adaptations of DP-SGD in sufficient detail, including pseudocode and justifications for design decisions.
5. **Demonstrates utility gains compared to alternative solutions** The empirical evaluation on image classification tasks demonstrates a modest utility gain compared to alternative baselines.
6. **Comprehensive supplemental material** The Appendix includes valuable additional discussions. The accompanying code is a significant extension to an existing library and demonstrates the practicality of the techniques.
Weaknesses: 1. **Limited empirical evaluation** The empirical evaluation is limited to convolutional architectures trained from scratch for image classification. It is unclear how the paper findings hold for other architectures, modalities, tasks, or practically relevant scenarios, such as fine-tuning.
2. **Modest utility gains** The utility gains with respect to vanilla DP-SGD are modest (1-5%). The comparison to alternative baselines with simpler implementations in Appendix E shows even more modest gains. For instance, simply training sequentially on each group separately using DP-SGD achieves accuracies only 0.3-0.41% below the best of **Sample** and **Scale** (see Table 12). It is unclear to what extent these differences are statistically significant, whether comparable effort has been put in hyperparameter tuning, and whether mild modifications to this alternative baseline could outperform **Sample** and **Scale**.
3. **Unsupported claims about the confidentiality of privacy preferences** The authors argue that privacy preferences (i.e., per-group privacy budgets) should be kept private and claim that their modifications of DP-SGD do not leak information about privacy preferences because untrusted parties only interact with the final trained model. This claim is unsupported by proof (compare this to Allagan et al. [2], which prove a similar guarantee explicitly) and is made with respect to a different adversary model than the privacy guarantees of training data in DP-SGD, which consider adversaries that observe noisy gradients released at every training iteration, not only the final model.
## Minor comments
- Given the number of references to numbered lines in Algorithm 3, it would be convenient to include the listing in the body of the paper.
- l.76: "[D]iffer in any one record" is ambiguous. Consider making clear that you use the add/remove neighboring relation. That is, $D,D'$ are adjacent if $D = D' \cup \{x\}$ for some record $x$, or vice versa.
- l.79: "DP bounds privacy leakage for any individual". This is only true under the assumption that an individual contributes at most one record.
- l.86: The original conversion from $(\alpha, \rho)$-RDP to $(\varepsilon, \delta)$-DP of Mironov [21] is suboptimal. Balle et al. [A, Theorem 21] provide a tighter conversion (which is the one Opacus uses).
- Table 1: $\sigma_{\rm SAMPLE}$ should read $\sigma_{\rm sample}$.
- Algorithm 1: Missing inputs: scaling factor $s_i$, target sample rate $q$.
- Algorithm 1: In l.1, you could initialize $\sigma$ using $\textit{getNoise}$ as $\sigma \gets \textit{getNoise}(\varepsilon_1, \delta, q, I)$.
- Algorithm 1: In l.4 $G_p$ should read $\mathcal{G}_p$.
- l.190: "our Sample [method]"
- l.209: While I have seen it being used before, I think that the notation $\stackrel{!}{=}$ is far from being conventional.
- l.215: "[W]e still require the expected average sampling rate to remain $q$ to obtain constant mini-batch size $B$". As you said just before, the mini-batch size isn't constant, $B$ is the **expected** size. Rather, you want the expected mini-batch size in IDP-SGD to be the same as in vanilla DP-SGD with sampling rate $q$.
- l.225: There's an extra closing parenthesis.
- l.232: DPSGD should read DP-SGD.
- l.260: Use \{$\sigma_1,\ldots,\sigma_P$\} rather than $\sigma_p$-s.
- l.313 (and subsequently): "Lira" should read "LiRA"
- Numbers in bibliographic references are shifted between the paper and the full paper (with Appendix) provided as supplemental material.
- Appendix, Algorithm 3: In line 11, $\theta_T$ should be $\theta_I$.
- l.566: $\sigma_{\rm in}$, $\sigma_{\rm out}$ should read $\sigma^2_{\rm in}$, $\sigma^2_{\rm out}$, respectively.
- l.576: "Line 3 in Algorithm 3" should be "Line 8 in Algorithm 3"
- l.591-596: $G_p$ should read $\mathcal{G}_p$.
- l.691: "Appendix E.3" should read "Table 12"
- Table 10: How can it be that you get a statistically significant negative $\Delta$ for one of the models? That would reverse the relation between the two privacy groups.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. **Confidentiality of privacy budgets**
The paper argues that IDP-SGD preserves the confidentiality of group privacy preferences, at least against parties that only interact with the final trained model. However, intuitively, models trained with a uniform privacy budget of $\varepsilon=1$ and models trained on the same data but
increasing the privacy budget for a single group from $\varepsilon=0.1$ to $\varepsilon=100$ will likely perform noticeably different on that group. Can you provide a proof that IDP-SGD satisfies some form of confidentiality for privacy preferences that makes explicit the adversary model and assumptions?
2. **LiRA AUC scores**
Carlini et al. [6] report LiRA results on models trained with DP-SGD and $\epsilon = 8$ (not far from one of your choices of $\epsilon = 10$), with a significantly lower AUC score of 0.503 compared to the score of 0.537 that you report. Even for $\varepsilon > 5000$, their best result has a lower AUC score of 0.527. Could you explain this difference?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address societal impact but only some limitations.
A paragraph at the end of the introduction discusses the need for openly communicating privacy risks and regulatory supervision. This points to Appendix A, which discusses broader impacts in the light of prior studies and some limitations: evaluation on standard benchmarks with simulated privacy preference distributions; practical impact measured via membership inference attacks only. I think that the limited nature of the evaluation on CNN models trained from scratch on visual tasks should also be brought up front, as the results might differ in other settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we would like to express our gratitude to the reviewer for their thorough review and detailed feedback that goes beyond any expectation.
> **Limited empirical evaluation**
We thank the reviewer for their suggestion to extend our experimental evaluation on other architectures, tasks, modalities, and fine-tuning. We ran additional experiments with IDP-SGD for:
- fine-tuning a BERT transformer on the SNLI dataset for natural language inference,
- training a ResNet 18 with IDP-SGD from scratch on the CIFAR10 dataset for image classification,
- training a linear embedding model on the IMDB dataset for text classification,
- and training a character-level RNN from scratch to classify surnames to languages.
The results are included in the general response.
Over all the considered experiments, IDP-SGD outperforms standard DP-SGD which underlines that our findings on the performance improvements of IDP-SGD generalize over different setups.
> **Modest utility gains**
The main goal of our work is to enable individually private training with different epsilon values. We compare the possible options and present the best performing method in the main body of the work. We performed thorough hyperparameter tuning over all our methods, including E.1 and E.2. Note that the hyperparameter tuning for E.1 and E.2 goes beyond standard parameters. For E.1, one also needs to tune over which group(s) to exclude, and for E.2 in what order to train on the different groups. We performed all these analyses and reported the best results.
From a conceptual viewpoint, Scale and Sample outperform the other baselines when the data distribution differs significantly between privacy groups. E.1, which completely excludes whole groups from training, cannot learn the high-privacy groups at all, while E.2, which trains on the privacy groups sequentially, suffers from catastrophic forgetting: the final model performs worse on the groups seen first during training. In contrast, with Sample and Scale, all privacy groups contribute throughout the entire training process.
Finally, we would like to note that in DP-SGD research, a utility improvement of 1-2% is a significant contribution, especially in the low-privacy regime (eps=1,2,3) that we are operating in. See for example, the [[Unlocking high-accuracy differentially private image classification through scale]](https://arxiv.org/pdf/2204.13650v2.pdf)-paper that outperforms prior work by less than 2% at epsilon=3. Looking into Fig 1a) of the paper, we see that the improvement of the other prior baselines over each other is usually in the range of 1-2%.
> **Minor comments**
We made space for Algorithm 3 in the main paper by merging Figure 3a) and 3b), clarified the DP definition, and fixed all the formatting suggestions pointed out by the reviewer. Additionally, we replaced the conversion from RDP to (eps,delta)-DP by the one proposed in Balle et al. Finally, we adapted Algorithm 1.
Regarding the negative delta in Table 10, we are sorry for the confusion. The delta (effect size) in the table should be positive 4.03. We corrected the typo.
> **Question 1: Confidentiality of privacy budgets**
We wrote a thorough explanation on the confidentiality of privacy budgets in our methods in the general response.
Regarding the confidentiality of privacy values in [2], we would like to mention that their application of HDP is within a distributed gossip-based semantic clustering protocol that operates in an iterative manner. In the protocol, pairs of users calculate the cosine similarity between their preferences (see page 11). The Stretching mechanism used for this relies on both users’ privacy preference vectors as parameters. This means the users directly interact with each other’s privacy preference vectors, which requires privacy protection for these vectors. In contrast, in our setup, all privacy-preference-based operations are already performed before the end user (potential adversary) gets to interact with the model, where determination of privacy budgets is difficult and highly impractical (see arguments 1. and 2. in the general response).
When dealing with different privacy groups, [2] implements a form of deniability for users’ membership in a certain privacy group by uniformly sampling the privacy values over groups in an overlapping way: the unconcerned sample from {1.}, the pragmatists from {0.5, 0.75, 1.}, and the fundamentalists from {0, 0.5, 1}, such that on average, the values will be 1., 0.75, and 0.5 within the groups (0 denotes the strongest, while 1 is the weakest privacy guarantee). Such an approach can be implemented into our IDP-SGD framework as well. The drawback is that an individual who expresses strong privacy preferences (fundamentalists) might actually be assigned a privacy value 1 by randomness, which causes the same privacy leakage as for an unconcerned individual. Depending on the individuals’ preferences and the application, when protecting privacy group membership is more important than protecting the actual data, this approach might still be valid.
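The overlapping sampling from [2] described above can be sketched as follows (a hypothetical illustration of the idea, not part of our implementation; group names and value sets as given above):

```python
import random

# Privacy values as in [2]: 0 is the strongest, 1 the weakest guarantee.
VALUE_SETS = {
    "unconcerned":     [1.0],
    "pragmatists":     [0.5, 0.75, 1.0],
    "fundamentalists": [0.0, 0.5, 1.0],
}

def assign_privacy_values(group, n, rng):
    """Uniformly sample one privacy value per user; the overlap between the
    sets gives each user deniability about their true group membership."""
    return [rng.choice(VALUE_SETS[group]) for _ in range(n)]

rng = random.Random(0)
values = assign_privacy_values("fundamentalists", 100_000, rng)
assert abs(sum(values) / len(values) - 0.5) < 0.01  # group mean ~0.5 as in [2]
```

A fundamentalist who happens to draw the value 1.0 in this sketch is exactly the residual-leakage case discussed above.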
We added a discussion about adopting this approach into our paper as Section 3.5.
> **Question 2: LiRA Scores**
We thank the reviewer for this insightful detailed question. The differences in our vs. their setup that can cause our higher AUC score are:
1. We set eps=10 for half of the data and eps=20 to the other half, which is significantly higher than their eps=8.
2. The accuracy of our models differs from theirs. Our target model achieves a train accuracy of 68.46% on its 25,000 member data points, and a test accuracy of 64.89% on its 25,000 non-member data points. They report 61.3% test accuracy (we could not find their train accuracy in the appendix). Differences in accuracy, especially differences in the train-test gap can cause differences in the MIA success rate.
3. Their attack on CIFAR10 uses 256 shadow models while we use 512. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their feedback, which has greatly helped us improve the paper. We are glad that the reviewers recognize that our work addresses an important problem by proposing the first personalization mechanisms for DP-SGD (3QXJ), which are novel, easy to implement, and provide theoretical privacy guarantees (7px6, Sruy, r49G). Our methods ‘yield a modest utility [gain] over alternative baselines’ (3QXJ, Sruy, r49G), and the ‘experiments in this paper are adequate for the evaluation,’ with additional valuable results and discussions in the appendix (7px6). The reviewers appreciate the writing and the organization of the paper (7px6, r49G), and we hope that this work can contribute to making machine learning with individualized privacy guarantees practical. Below we offer clarifications to some common questions.
>**1. Extending experiments**
**Other (larger) architectures, modalities, tasks, and fine-tuning:**
Following reviewers 3QXJ and r49G, we evaluated IDP-SGD in a broader scope and
fine-tuned a BERT transformer (bert-base-uncased) on the SNLI dataset for natural language inference,
trained a ResNet 18 from scratch on the CIFAR10 dataset for image classification,
trained a linear embedding model *(one embedding and two linear layers)* on IMDB data for text classification,
and trained a character-level RNN *(DPLSTM, three layers, hidden size: 128)* from scratch to classify surnames to languages.
|Model|Dataset|DP-SGD (eps=5)|Sample (eps=5,7.5,10)|Scale (eps=5,7.5,10)|
|-----------|----------|----------------|--------------------|-------------------|
|BERT|SNLI|75.91+-0.23|76.11+-0.21|76.5+-0.17|
|ResNet18|CIFAR10|47.52+-0.84|48.52+-0.69|48.77+-0.73|
|Embedding|IMDB|72.69+-0.27|73.27+-0.3|73.34+-0.11|
|||**(eps=1)**|**(eps=1,2,3)**| **(eps=1,2,3)**|
|RNN|Surnames|60.86+-0.78|65.56+-0.96|66.0+-1.19|
Across all setups, our IDP-SGD outperforms standard DP-SGD → see also attached PDF, Table 1.
**Runtimes:**
Inspired by reviewer Sruy, we analyzed runtimes of our methods. E.g., for the first privacy setting of Table 2, and the CIFAR10 dataset with the CNN architecture, we observe that Sample and Scale do not significantly increase runtime over DP-SGD with 406.33 sec for Sample, 414.33 for Scale vs. 403.67 for DP-SGD → see PDF Table 3.
Regarding the time for computing the privacy parameters, which takes place *exactly once* before training with a new privacy setup, we included a detailed Table 4 in the PDF. The computation time depends on a precision parameter and the number of groups: for example, with 8 groups and precision=0.0001 (used in this work), Sample requires 20 sec. and Scale 5 sec. to determine the parameters.
**More privacy groups:**
Following reviewer Sruy, we implemented IDP-SGD training on MNIST with 100 evenly sized privacy groups with budgets [1, 1.05, 1.1, ..., 5.9, 5.95]. Our methods achieved the following accuracy: Sample: 98.17%, Scale: 98.21% vs. the standard DP-SGD baseline with epsilon=1: 96.75%. → see PDF Table 2.
**Compare MIA with standard DP-SGD**
Given the suggestion by reviewer 7px6, we trained models with $\varepsilon=10$ and $\varepsilon=20$ with standard DP-SGD and compared the results to our MIA results with IDP-SGD. Our key observations are:
1. IDP-SGD reduces the privacy risk for the data points with $\varepsilon=10$ (standard DP-SGD: AUC=0.568; Sample: 0.537; Scale: 0.552).
2. This privacy gain does not come at large expense of points with $\varepsilon=20$ whose privacy risk remains roughly the same (AUC=0.59 vs Sample: 0.581 and Scale: 0.589).
→ see PDF Figure 1.
> **2. Confidentiality of privacy preferences**
The confidentiality of our privacy budgets results from the following observations:
1. Even for standard DP training, it was shown that one cannot perform sample-efficient black-box audits to determine the privacy-budget of a trained model [[Property Testing for Differential Privacy]](https://doi.org/10.48550/arXiv.1806.06427). Existing frameworks that perform auditing, e.g. [[Debugging Differential Privacy: A Case Study for Privacy Auditing]](https://arxiv.org/pdf/2202.12219.pdf) rely on training 1000-100,000 shadow models to get an estimate of the full model’s privacy guarantee.
In our IDP-SGD setup, the model does not have one single $\varepsilon$, but one $\varepsilon$ per group, aggravating the impracticability of auditing guarantees and limiting the applicability of existing frameworks. As a result, the adversary cannot determine the different per-group privacy budgets of the model.
2. Our MIA experiments in Section 4.2 show a difference in privacy risks over *entire privacy groups*. First note that these experiments (Figure 2) were the most costly ones in our paper. We had to train more than 1500 (3 times 512) shadow models to generate the results. Yet, there are two limitations: a) The results only show differences between the groups, but do not reveal the respective epsilon values, and b) they do not allow per-point distinctions. This is because the distributions of privacy risks over the two groups still have a large overlap. Hence, given a single point’s risk and the two distributions, one cannot predict without error to which group the point belongs. We acknowledge that the error could be reduced when the privacy budgets between the groups differ significantly (0.1 vs. 100). Note, however, that we suggest deploying the model in a setup where individuals simply specify their privacy preferences and the model builder or an ethics committee assigns concrete epsilon values to these preferences. We observed that extreme differences between privacy groups’ $\varepsilon$s are less beneficial because they degrade overall model performance. Hence, in practice, these large differences are very unlikely. As a result, it will be very challenging for an adversary to determine which privacy group a data point belongs to. Even if they manage to, they will not be able to learn the privacy budget of the group (see 1.).
Pdf: /pdf/af213782c1299feb3edc3ca41cb96df0a00ddbb7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Size and Approximation Error of Distilled Datasets | Accept (poster) | Summary: This paper provides a theoretical analysis of the approximation errors of recent dataset distillation methods based on kernel ridge regression (KRR). It mainly utilizes previous results on KRR and applies them in the context of dataset distillation. Simple simulation results verify the derived bounds.
Strengths: 1. The paper derives valid error bounds for KRR-based dataset distillation methods under some assumptions. It provides a guidance for the size of synthetic dataset that is required to result in pleasant errors.
2. The writing is logical and consistent. The contributions of this paper and how this paper inherits previous results is very clear.
Weaknesses: 1. My major concern is that the analysis is not consistent with the typical setting of dataset distillation:
* This paper uses a distilled dataset whose size is greater than the number of features encoded by the kernel. However, the KRR-based methods in dataset distillation often rely on infinitely wide neural networks, which in practice is approximated by a large feature dimension, while the size of the distilled dataset is in fact very small, e.g., only 1 image per class.
* In fact, the setting where the number of distilled samples is larger than the feature dimension makes the problem much simpler, according to my own experience. For example, we can directly solve for the labels $y$ without touching the samples $X$, as shown in the proof of (i), to get exactly the same KRR solution. Intuitively, if the KRR solutions of the real and synthetic datasets are consistent, predictions on data should also be consistent, which indicates that the results in the paper are somewhat trivial.
* Moreover, this paper assumes that the output dimension is only 1. However, the output dimension is in fact the number of classes for typical classification problems. It is unclear whether the error bounds derived in the current paper are still useful in real cases.
2. The technical contribution is not enough for a NeurIPS paper. The paper directly applies results of previous works. The results in the current paper can be viewed as an implication of previous results on the dataset distillation setting, since the setting is simple as mentioned previously.
3. The error bounds are not tight as shown in the experiment section, which means the theoretical results are not very useful in practice.
4. The experiments are too simple. The largest scale is only binary classification on MNIST dataset. For dataset distillation, at least results on CIFAR10 are expected as very standard benchmarks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The analysis for cases that the number of synthetic samples is smaller than the feature dimension.
2. The analysis for cases that the output dimension is larger than 1.
3. Experimental analysis on larger datasets.
Please refer to the weaknesses part for details.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I have not found issues related to this part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the insights shared by the reviewer and the expert evaluation they provided. Integrating their feedback, we have already made significant improvements to the paper. We are looking forward to further engagement with the reviewer as we enter the upcoming open discussion phase.
We have taken careful consideration of every comment raised in the review, ensuring a thorough response to each. We are optimistic that our comprehensive explanations may encourage the reviewer to consider raising the score. If any further clarification is required, please do not hesitate to reach out to us.
*Comment 1.1:* The size of the distilled set is not related to the “feature space” dimension correlated with the NTK of infinite-wide networks or the dimension of the input. It is proportional to the dimension that is guaranteed by the “weighted RFF” of [1]. Thus, it is not necessarily large and depends mostly on the eigenvalues’ decay rate of the Gram matrix $K$ that corresponds to the kernel. For example (for shift-invariant kernels), if the decay rate is exponential, the size of the distilled set is polylogarithmic in $n$ or even $O(1)$ when $K$ has finite rank.
*Comment 1.2:* We do not ensure that the KRR solution for $(\mathbf{S},y_{\mathbf{S}})$ is exactly the same as the KRR solution on the full data. In fact, the distilled-set KRR solution is formulated via instances of the distilled set whose corresponding in-sample prediction function lies in a ball of radius $2\lambda$ around the in-sample prediction function of the solution $\beta$ to the ridge regression (RR) model optimized for the RFF image of the full data, i.e., in our derivations, we aimed to find another in-sample prediction function in the KRR space involving $S$ that lies within a bounded space which also encapsulates the optimal KRR solution found on the full data.
Finally, the KRR objective function involves a weight vector $\alpha$, which is used to define the in-sample prediction of any $x\in X$ as a linear combination between each instance in the data ($S$). Thus the solution of the KRR on the entire data is not the same solution that we obtain due to the fact that these two solutions lie in two different spaces (different dimensionalities), and thus the in-sample prediction function is not guaranteed to be the same.
*Comment 1.3:* The theory behind our approach can be used for the case of multiclass classification as kindly described in [1]. For multi-output data, $\beta$ will be a matrix where our theory will hold for every column in $\beta$ with the corresponding column of the labels matrix. We have expanded on this in our paper.
*Comment 2:* We thank the reviewer for the honesty. However, we respectfully disagree with those claims. This result is indeed **not** an implication of previous results. While we did rely on tools from [1], the novelty of our paper lies in carefully analyzing the implied meaning of such tools and exploiting their underlying structure to our benefit.
Specifically, we showed how to design the labels of the distilled set $S$ such that certain properties hold: we construct a label function for $S$ via a variable reformulation that enables solving a system of equations which (a) ensures the label function maintains a direct connection to the solution of the RR problem involving the RFF image of the input data $X$, and (b) ties the minimal size of $S$ that guarantees the existence of such a label vector to the dimension of the RFF space investigated by [1].
From there, we further ensure the existence of a KRR solution by showing that there exists a different in-sample prediction function that is close, in terms of prediction quality, to the aforementioned solution of the RR problem involving the RFF space of $X$. Such existence is indeed not trivial and requires careful analysis, e.g., (a) ensuring that the minimal distilled set is larger than the dimension of the RFF space, so that a label vector exists for a distilled set consisting of any instances, and (b) exploiting the underlying structure of this label vector to ensure the existence of an in-sample prediction function residing in the same space as the optimal in-sample prediction function defined by the solution of the KRR model on $X$. At first glance this is not immediate, and it required careful derivation and ensuring that certain properties hold.
Our analysis did not stop here but rather we also bounded the added error of using our distilled set leading to a different bound to that of [1]. Note that in our context, no prior paper suggested any provable guarantee on the size of the distilled set or its approximation error; our paper is the first.
*Comment 3:* Initially, our proofs are crucial for advancing the field in practice. By setting bounds on size and approximation, we offer a way to analyze, validate, and debug the implementation and correctness of new algorithms when used with unexplored datasets.
Exploring the underlying theory supporting small distilled sets, substantiated by rigorous proofs, is a crucial milestone. It marks an initial step towards creating provably robust dataset distillation techniques, an aspect lacking in the existing literature, building on the foundational understanding of dataset distillation begun in this work. Hence, our work can be seen as the primary stride towards such a goal. While more investigation is necessary for a comprehensive grasp of dataset distillation, this work serves as the first step in introducing theoretical guarantees to dataset distillation.
*Comment 4:* Following your valuable comment, we added experiments on CIFAR10 (attached in the PDF). Results on SVHN will be added.
[1] = [LTOS21]
---
Rebuttal Comment 1.1:
Title: Thanks for the Rebuttal
Comment: I would like to thank the authors for the detailed response to the concerns and questions. Frankly speaking, I cannot easily follow every single line of theoretical derivation and the authors' rebuttal. What I discuss here is based on my understanding. If it is wrong please correct me.
*Comment 1.1:* I have understood that the size of the distilled set is not related to the “feature space” dimension. However, I am very curious about how to **intuitively** understand the feature space dimension I mentioned in the comments, the eigenvalues’ decay rate of the Gram matrix mentioned by the authors, and their relationships. For example, in what cases the decay rate is exponential? Do the input datasets have to be equipped with some specific patterns?
*Comment 1.2:* Looking at (1) of Theorem 3 again, I assume that $\lambda$ should be small and not make a large difference in the numerical results. If that is the case and we want to find the optimal label $y_S$, since the number of samples ($s_{\phi}+1$) is larger than the number of RFF features ($s_{\phi}$), the equation is underdetermined and we can definitely find a proper $y_S$ to guarantee the same KRR solution. Even though it is not exactly the same due to the effect of $\lambda$ for regularization, the error should be small enough. If the error in the KRR solutions is not large, we can expect that the error in predictions is not large, either. So I am not sure why the following large paragraph of theoretical analysis is necessary.
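The underdetermined-system point above can be checked numerically with a toy sketch (the dimensions, `lam`, and construction here are chosen for illustration only and are not the paper's construction):

```python
import numpy as np

# Toy check: with s = d + 1 samples and d features, the linear system
# Phi^T y = (Phi^T Phi + s*lam*I) beta is underdetermined in y, so labels
# y_S can be chosen to make ridge regression on (Phi, y_S) reproduce a
# given target solution beta exactly.
rng = np.random.default_rng(0)
d, lam = 5, 0.1
s = d + 1                                 # plays the role of s_phi + 1
Phi = rng.standard_normal((s, d))         # feature image of the distilled set
beta_target = rng.standard_normal(d)      # target ridge-regression solution

# Ridge solution on (Phi, y) is (Phi^T Phi + s*lam*I)^{-1} Phi^T y, so we
# need Phi^T y = (Phi^T Phi + s*lam*I) beta_target; solve for y.
rhs = (Phi.T @ Phi + s * lam * np.eye(d)) @ beta_target
y_S = np.linalg.pinv(Phi.T) @ rhs         # minimum-norm solution

beta_recovered = np.linalg.solve(Phi.T @ Phi + s * lam * np.eye(d), Phi.T @ y_S)
# beta_recovered matches beta_target up to numerical precision
```

Since `Phi.T` has full row rank for generic `Phi`, the pseudoinverse gives an exact solution of the underdetermined system, so the recovered ridge solution coincides with the target.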
*Comment 1.3:* I have understood that the theoretical analysis is also applicable to multi-class cases. However, I wonder if it is possible for the authors to validate this through experiments, to better see the scalability of the proposed theoretical tool. Please do not worry if the authors think the time is tight and I will definitely understand it. I just want to express my curiosity here.
*Comment 2:* I have understood that the proposed theoretical analysis is not a trivial implication of previous results.
*Comment 3:* I definitely understand that the paper is the first that focuses on theoretical analysis of the error bounds in dataset distillation. But indeed the gap is too large, especially for the case of small synthetic datasets. If the authors can provide some insightful explanation on this and discuss some possible future solutions, I think it is acceptable given that the paper is the first work on this direction.
*Comment 4:* Thanks for the new results!
Overall, I choose to increase my score to 4, given that the authors addressed part of my concerns.
---
Reply to Comment 1.1.1:
Title: Thank you + additional answers
Comment: We thank the reviewer for the quick response and for engaging with us. Indeed we appreciate that you raised your score. It is great to see how the open review process is beneficial this way.
**Comment 1.1:**
We thank the reviewer for pointing this out. Here are some deeper details: in order to quantify $s_\phi$, which in turn determines the size of the distilled set, we need to measure $d_K$, the trace of $\mathbf{K} \left( \mathbf{K} + n\lambda I_n\right)^{-1}$:
1. For the case where $K$ has finite rank, i.e., the number of positive eigenvalues is lower than $n$, then $s_\phi \in O(1)$.
2. As for the exponential decay, it occurs when the kernel is Gaussian and the marginal distribution of the input data (e.g., images) is sub-Gaussian. In such a case, it was shown in [1] that $d_K \in O\left(\log{\frac{1}{\lambda}} \right)$. Thus $s_\phi$ is poly-logarithmic in $n$ when $\lambda := O\left( \frac{1}{\sqrt{n}}\right)$.
3. For the case where the Hilbert space $\mathcal{H}$ is also a Sobolev space of order $\gamma \geq 1$, then $d_K \in O\left(\lambda^{-\frac{1}{2\gamma}}\right)$, which in the case of $\lambda := O\left( \frac{1}{\sqrt{n}}\right)$ gives $s_\phi \in \Omega\left(n^{\frac{1}{4\gamma}}\right)$.
4. In the general case, where the decay of the eigenvalues admits $\lambda_i \propto O(i^{-1})$, $d_K$ in the worst case is bounded by $O\left(\frac{1}{\lambda}\right)$, and thus, via Theorem 2, we can deduce that $s_\phi \in O(\sqrt{n}\log{n})$ for the case of $\lambda \in O\left(\frac{1}{\sqrt{n}}\right).$
Note that the choice of $\lambda \in O(\frac{1}{\sqrt{n}})$ is used in the literature of KRR to ensure that the learning rate is $\frac{1}{\sqrt{n}}$ as shown in [2].
[1] Bach, F. (2017). On the equivalence between kernel quadrature rules and random feature expansions. The Journal of Machine Learning Research, 18(1), 714-751.
[2] Rudi, A., & Rosasco, L. (2017). Generalization properties of learning with random features. Advances in neural information processing systems, 30.
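The quantity $d_K$ discussed above can be computed directly for a given Gram matrix; the following is a minimal sketch (Gaussian kernel on synthetic data, with all parameters chosen here purely for illustration):

```python
import numpy as np

def effective_degrees_of_freedom(K: np.ndarray, lam: float) -> float:
    """d_K = tr(K (K + n*lam*I_n)^{-1}) for an n x n Gram matrix K."""
    n = K.shape[0]
    return float(np.trace(K @ np.linalg.inv(K + n * lam * np.eye(n))))

def gaussian_gram(X: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Gaussian kernel Gram matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = gaussian_gram(X)
n = K.shape[0]

# In terms of the eigenvalues mu_i of K, d_K = sum_i mu_i / (mu_i + n*lam),
# so it lies in (0, n) and shrinks monotonically as lam grows.
d_K = effective_degrees_of_freedom(K, lam=1.0 / np.sqrt(n))
```

Faster eigenvalue decay yields a smaller $d_K$ and hence a smaller $s_\phi$, consistent with cases 1-4 above.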
**Comment 1.2:**
We chose the size of the distilled set to be $s_\phi + 1$ to ensure the existence of a $y_S$ with the properties we sought to maintain. However, this step alone is not enough to ensure that our distilled set admits the provable guarantees stated by Theorem 3. We thus also had to engineer an in-sample prediction function that resides in the same space as the optimal in-sample prediction function defined by the solution of the KRR model on the entire data $\mathbf{X}$.
Note that we do not ensure the same KRR solution but rather the same solution to the ridge regression problem, which does not theoretically guarantee the same KRR solution, i.e., the in-sample prediction on $(X,y)$ obtained from $(S,y_S)$ is not guaranteed to equal the optimal KRR solution computed on $(X,y)$.
Thus the main goal of the other paragraphs is not to show the approximations in the RFF space but to prove the existence of a corresponding set in the original space, with a similar KRR solution in terms of approximation on the whole data P (not ridge regression in the RFF space).
**Comment 1.3:**
We thank the reviewer for their keen interest in our paper. We are now running an experiment for the multi-class case, to be completed before the end of the discussion period.
**Comment 2:**
We appreciate your comment. Indeed, the explanations we provided here were added to the paper itself to improve its clarity. Thus, we greatly appreciate your raised comments and responses. | Summary: This manuscript gives a first theoretical understanding of the synthetic datasets generated in the dataset distillation task. Concretely, the authors prove (1) the existence of distilled datasets and (2) that the generalization error is related to the "number of effective degrees of freedom" in the random Fourier features (RFF) regime. The theoretical bounds are further verified by simple experiments.
Strengths: 1. First theoretical work on the field of dataset distillation and the theoretical results are well established;
2. Theoretically show the correlation between the size of distilled dataset and the characteristics of kernels in RFF regime;
3. Theoretically show the generalization bound w.r.t. distilled datasets in kernel ridge regression (KRR) regime.
Weaknesses: 1. The theoretical results are built on KRR, which has a gap to the finite-width network architectures used in dataset distillation.
2. Can the derived generalization bounds provide insights for developing novel dataset distillation algorithms? For example, improving the kernel architecture to decrease the right-hand term in the generalization bound, thereby reducing the risk of models trained on synthetic datasets. In this way, the bound could be shown to be tight and practical.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our appreciation to the reviewer for their expert evaluation, insightful remarks, positive feedback, and valuable suggestions that have contributed to the enhancement of our manuscript.
We now delve into a comprehensive discussion of the concerns raised by the reviewer. We trust that our responses provide a thorough resolution to all your queries, and we look forward to the possibility of you revising your evaluation positively. Should there be any lingering points of concern, we welcome the opportunity to address them to your satisfaction.
**Response to Comment 1.**
We thank the reviewer for pointing this out. Many theoretical papers in the field of deep learning focus on infinite-width neural networks, which are easier to analyze and interpret [1,2,3], and on connecting such networks to other well-known models in the field [4,5]. In addition, such networks are widely used for practical dataset distillation [6,7]. This phenomenon arises because KRR-based distillation techniques exhibit strong theoretical compatibility with neural networks of infinite width: in this regime, the training process of the neural network aligns with kernel regression principles, as elegantly demonstrated by [1].
To this end, we started by analyzing infinite-width architectures that satisfy such helpful theoretical attributes. We believe it is the first step towards a better understanding of dataset distillation, which will allow us to provide better provable distillation algorithms and interpret the theory behind them. We hope that our work will provide the first theoretical stepping stone towards analyzing and better understanding the magic behind dataset distillation techniques (specifically KRR-based).
[1] Jacot, A., Gabriel, F., & Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31.
[2] Arora, S., Du, S. S., Hu, W., Li, Z., Salakhutdinov, R. R., & Wang, R. (2019). On exact computation with an infinitely wide neural net. Advances in neural information processing systems, 32.
[3] Yang, G., & Hu, E. J. (2020). Feature learning in infinite-width neural networks.
[4] Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., & Sohl-Dickstein, J. (2017). Deep neural networks as gaussian processes.
[5] Sohl-Dickstein, J., Novak, R., Schoenholz, S. S., & Lee, J. (2020). On the infinite width limit of neural networks with a standard parameterization.
[6] Nguyen, T., Novak, R., Xiao, L., & Lee, J. (2021). Dataset distillation with infinitely wide convolutional networks. Advances in Neural Information Processing Systems
[7] Loo, N., Hasani, R., Amini, A., & Rus, D. (2022). Efficient dataset distillation using random feature approximation. Advances in Neural Information Processing Systems
**Response to Comment 2.**
This is indeed important, and we thank the reviewer for pointing this out. For space limit purposes, we kindly refer the reviewer to our answer to Question 2 of Reviewer myfC. For your convenience: https://openreview.net/forum?id=XWYv4BNShP&noteId=q8bbHghfO8. | Summary: The paper attempts to provide the first theoretical guarantees on the existence of dataset distillation, under the setup of kernel ridge regression. The proof techniques are mainly based on the theory of random Fourier features. They also provide experiments intended to support their theoretical results.
Strengths: 1. The paper is the first attempt to theoretically guarantee the existence of dataset distillation, which is an important topic for efficient learning.
2. Experimental results are seemingly consistent with the theoretical results, which could strengthen their claim if the relationship between the theoretical results and the experimental protocol were clear.
Weaknesses: 1. The paper seems not to pay attention to readability of its theoretical results. E.g., Theorem 2 is explained by just 2 lines (l.130-131) without any proof; the notations and statements in Theorem 2 and Theorem 3 are not sophisticated.
2. The proof of Theorem 2 is not provided even in the supplementary materials, but just stated as "A result of the proof of Theorem 1 and Corollary 2 of [ LTOS21]".
3. I cannot find the proof of the existence of the distilled dataset S in the proof of Theorem 3, which is the main claim in this paper.
4. The relationship between the theoretical construction (which I cannot find) and the distillation method used in experiments is unclear. So I'm not confident in whether the experiments really support their theoretical results.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. Where is the existence of S proved?
2. Can you give the intuition or a short strategy of the proof before Theorem 3? For example, why can the number of distilled data points in $S$ be $s_\phi + 1$? How can we construct it explicitly?
3. Where did you explain what distillation algorithm is used in experiments? How does the algorithm relate to the construction in the theoretical results?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitation is discussed in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to extend our heartfelt appreciation to the esteemed reviewer for their dedicated commitment to meticulously evaluating our paper. The thoughtful points and careful reading hold a pivotal role in the refinement of our work. We have diligently addressed each of these valuable concerns, and we remain enthusiastic about engaging in further dialogues with the reviewer to ensure the complete resolution of any lingering matters.
Summing up, we hold the belief that our comprehensive response will ideally have a positive effect on your assessment, potentially boosting your score.
**Response to Comment 1:**
Thank you for raising this comment. This is indeed important. We kindly refer the reviewer to the section “Clarity of our theoretical results” in the general comments, which provides most of the details added to our manuscript, explaining Theorems 2 and 3.
For your convenience: https://openreview.net/forum?id=XWYv4BNShP&noteId=RA4SAIBzsG
**Response to Comment 2:**
We thank the reviewer for the careful reading and professional review. The proof of Theorem 2 can be done by following some derivations from the proofs of Theorem 1 and Corollary 2 from [LTOS21]. Following your insightful comment, and for better readability and completeness, we have restated these derivations in the appendix of the paper, while pointing out the original derivations from [LTOS21].
**Response to Comment 3:**
We thank the reviewer for raising this concern. Let $X$ be the input data, let $S$ be the desired set whose existence we wish to prove, and denote by $\tilde{\mathbf{S}}$ and $\tilde{\mathbf{X}}$ the matrices whose rows are the RFF images of the instances of $S$ and of $X$, respectively.
First, note that $S$ can be any set of points as long as the labels $y_S$ satisfy that:
* The solution of the ridge regression on $\left( \tilde{\mathbf{S}}, y_S \right)$ is equivalent to the solution of the ridge regression on $\left(\tilde{\mathbf{X}}, y \right)$ (the image of input data $X$ in the space of RFF); see lines 151-155 in our manuscript.
In other words, we aim to ensure that the optimal solution in the context of Ridge regression on the RFF image of $X$ and its corresponding labels $y$ is identical to the optimal solution in the context of Ridge regression on the RFF image of $S$ and its corresponding labels $y_{S}$; this was done at lines 151–155.
The motivation behind such a goal lies in the core of Lemma 4 which indicates that:
* For every KRR in-sample prediction function $f$ with respect to $X$ defined by the instances of $S$ (referred to as $f_{\mathbf{S}}\left[ \mathbf{X} \right]$ in our context), there exists a ridge regression solution $\beta$, obtained from training a ridge regression model on $\left( \tilde{\mathbf{S}}, y_S \right)$, that admits an additive approximation to the MSE between $f$ and $\mathbf{X}\beta$; see the summation term in the inequality of Lemma 4.
With this in mind, we aimed at constructing such an in-sample prediction function using a ridge regression solution. To that end, to ensure proper usage of Lemma 4, we showed, through equation reformulation (involving $\beta$, the solution of the aforementioned ridge regression) and solving a system of equations, that there exists a KRR in-sample prediction with respect to the input data involving the distilled set $S$ that satisfies Lemma 4 with respect to $\beta$. With this and the weak triangle inequality (Lemma 5), we derive (i) and (ii) of Theorem 3. This concludes the existence of $S$, which depends on a certain structure that the labels of the distilled set need to admit.
Thus, in summary, by showing that (i) $S$ can be any set with (ii) its labels satisfying a concrete structural property, and the specific (iii) derivation of the KRR solution on $S$, we proved Theorem 3.
All of these details were added to the appendix and a part of them in the relevant places of the proof of Theorem 3 following your fruitful comment.
**Response to Comment 4:**
We apologize for missing this. In our experiments, we were directly minimizing the left-hand side of the equation in line 205 which directly corresponds to the KIP [1] loss which uses KRR for distillation. This method directly satisfies our theoretical assumptions and guarantees.
[1] Nguyen, T., Novak, R., Xiao, L., & Lee, J. (2021). Dataset distillation with infinitely wide convolutional networks. Advances in Neural Information Processing Systems, 34, 5186-5198.
------
**Answer to Question 1:**
Please see the answer to Comment 3.
For instance, see also Lines 151-155 for the construction of $y_S$ given $S$, and the derivation of $f_{\mathbf{S}}[X]$ at lines 169-187.
**Answer to Question 2:**
Sure, and thank you for pointing this out. We wrote a detailed explanation of the proof of Theorem 3 in the answer to Comment 1. Also, see the response to Comment 3 regarding the construction of $S$.
Regarding the use of $s_{\phi} + 1$: this ensures that the equation system that led to the derivation of $y_S$ has infinitely many solutions, which gives us the freedom to choose any $S$ as a distilled set as long as $y_S$ maintains a connection to the Ridge regression solution in the RFF space, as elaborated in the response to Comment 3.
**Answer to Question 3:**
We apologize for missing this. In our experiments, we were directly minimizing the left-hand side of the equation in line 205 which directly corresponds to the KIP [1] loss which uses KRR for distillation. This method directly satisfies our theoretical assumptions and guarantees.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: > Thus, in summary, by showing that (i) $S$ can be any set with (ii) its labels satisfying a concrete structural property, and the specific (iii) derivation of the KRR solution on $S$, we proved Theorem 3.
My question remains unsolved: how you constructed or proved the existence of $S$ satisfying these requirements? Could you clarify this? I'm not sure, so sorry if I missed it.
> We apologize for missing this. In our experiments, we were directly minimizing the left-hand side of the equation in line 205 which directly corresponds to the KIP [1] loss which uses KRR for distillation. This method directly satisfies our theoretical assumptions and guarantees.
Did you explain this somewhere in the main paper? Sorry if I missed it.
---
Reply to Comment 1.1.1:
Title: Thank you + additional clarifications
Comment: We express our gratitude to the reviewer for their prompt feedback and active participation in the discussion. We believe this is highly beneficial for improving our paper and its clarity.
____
**Regarding the existence of $S$**
We apologize if the previous answer was not clear. To make sure we cover all the relevant details, we first provide a set of steps to generate the distilled set $(S, y_S)$; then, we explain "why" following these steps guarantees that the required characteristics hold for the generated set $(S, y_S)$. Here are the steps to generate $S$:
1. $S \gets $ Sample a set of $s_\phi + 1$ instances from the input space uniformly at random.
2. Let $\tilde{S}$ be the RFF image of $S$ and let $\tilde{X}$ be the RFF image of the input data $X$.
3. Let $y_S$ be the labels of $S$ defined as a solution to the following equality (such a solution indeed exists, since we have $s_\phi + 1$ variables, and $s_\phi$ equations):
$$\left( \tilde{S}^T \tilde{S} + \lambda n s_\phi I_{s_\phi}\right) \left( \tilde{X}^T \tilde{X} + \lambda n s_\phi I_{s_\phi}\right)^{-1}\tilde{X}^Ty = \tilde{S}^Ty_S$$
4. Let $\beta$ be the solution of the Ridge regression problem involving $(S, y_S)$:
$$ \beta \gets \left( \tilde{S}^T \tilde{S} + \lambda n s_\phi I_{s_\phi}\right)^{-1} \tilde{S}^Ty_S$$
5. Following Lemma 4, find an in-sample prediction function $f_{S}[X]$ such that
$$\frac{1}{n} \sum\limits_{i=1}^n \left| f_S(X_{i*}) - \tilde{X}_{i*}\beta \right|^2 \leq 2\lambda$$
*On step 3.* Here, we generate the set of labels $y_S$ ensuring that the ridge regression solution with respect to the RFF image of the distilled set $(S,y_S)$ is identical to the ridge regression solution with respect to the RFF image of $(X,y)$; such a solution is referred to as $\beta$ above and throughout our manuscript (Step 4). The intuition behind this step is to leverage Theorem 2 in our context.
*On step 5.* Lemma 4 in our manuscript enables us to move from in-sample prediction defined over the distilled set with respect to the input data $X$ to the solution $\beta$ which is of high importance in our derivations. To this end, step 5 aims to ensure that there exists such an in-sample prediction function and we have ensured its existence via variable reformulation and equation-solving techniques as elaborated in lines 162–187 of our manuscript.
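To make the construction concrete, the five steps above can be sketched in a few lines of NumPy. This is only an illustrative toy (the sizes, the Gaussian-kernel RFF map, and all variable names are our assumptions, not the experimental setup of the paper); it also checks that the Ridge solutions on the distilled set and on the full data coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s_phi, lam = 200, 5, 10, 1e-2   # illustrative sizes and ridge parameter

# A random Fourier feature map for a Gaussian (shift-invariant) kernel.
W = rng.normal(size=(d, s_phi))
b = rng.uniform(0, 2 * np.pi, size=s_phi)
rff = lambda A: np.sqrt(2.0 / s_phi) * np.cos(A @ W + b)

X, y = rng.normal(size=(n, d)), rng.normal(size=n)

# Step 1: S is ANY set of s_phi + 1 points from the input space.
S = rng.normal(size=(s_phi + 1, d))

# Step 2: RFF images of S and X.
S_t, X_t = rff(S), rff(X)

reg = lam * n * s_phi * np.eye(s_phi)                      # ridge regulariser
ridge_X = np.linalg.solve(X_t.T @ X_t + reg, X_t.T @ y)    # solution on (X~, y)

# Step 3: y_S solves (S~^T S~ + reg)(X~^T X~ + reg)^{-1} X~^T y = S~^T y_S:
# s_phi equations in s_phi + 1 unknowns, so a solution exists (lstsq finds one).
rhs = (S_t.T @ S_t + reg) @ ridge_X
y_S = np.linalg.lstsq(S_t.T, rhs, rcond=None)[0]

# Step 4: the Ridge solution on the distilled set (S~, y_S) ...
beta = np.linalg.solve(S_t.T @ S_t + reg, S_t.T @ y_S)

# ... equals the Ridge solution on the full data, as the construction requires.
assert np.allclose(beta, ridge_X)
```

The same construction carries over to any shift-invariant kernel by swapping the RFF map.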
To obtain the provable guarantees associated with our distilled set $(S,y_S)$, we used the weak triangle inequality to split $\frac{1}{n} \sum\limits_{i=1}^n \left| y_i - f^\lambda_S(X_{i*})\right|^2$ to a linear combination of
(i) $\frac{1}{n} \sum\limits_{i=1}^n \left| y_i - f^\lambda_{\left[\tilde{\mathbf{X}}, y, \phi\right]}(X_{i*}) \right|^2$, and
(ii) $\frac{1}{n} \sum\limits_{i=1}^n \left| f^\lambda_S(X_{i*}) - f^\lambda_{\left[\tilde{\mathbf{X}}, y, \phi\right]}(X_{i*}) \right|^2$.
Observe that by construction, it holds that $f^\lambda_{\left[\tilde{\mathbf{X}}, y, \phi\right]}(X_{i*}) = \tilde{X}_{i*} \left( \tilde{X}^T \tilde{X} + \lambda n s_\phi I_{s_\phi}\right)^{-1}\tilde{X}^Ty = \tilde{X}_{i*}\beta$ for every $i \in [n]$. Thus, we note that (i) is bounded by Theorem 2, whereas (ii) is bounded using Lemma 4 and the construction of our in-sample prediction function $f_S[X]$.
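The weak triangle inequality behind this split is elementary and easy to sanity-check; a tiny sketch with $\tau = 2$ (the random vectors are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 10_000))

# Weak triangle inequality for squared errors with tau = 2:
# |a - c|^2 <= 2 * (|a - b|^2 + |b - c|^2), since (x + y)^2 <= 2x^2 + 2y^2.
assert np.all((a - c) ** 2 <= 2 * ((a - b) ** 2 + (b - c) ** 2) + 1e-12)

# Hence the averaged (MSE-style) version used in the split also holds.
lhs = np.mean((a - c) ** 2)
rhs = 2 * (np.mean((a - b) ** 2) + np.mean((b - c) ** 2))
assert lhs <= rhs
```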
**Regarding the minimized loss in our experiments**
Indeed, we missed clarifying these details regarding the generated set of points in the text itself; we minimize the left-hand side of the equation below line 205, i.e., $\frac{1}{n} \sum\limits_{i=1}^n \left| y_i - f^\lambda_S(X_{i*})\right|^2$.
Following your comments, these details were added to the paper. Thanks for the careful reading.
_______
We hope that our responses above effectively address your concerns, and motivate you to consider raising your score. Should you have any further areas of improvement in mind, please don't hesitate to inform us. Your feedback would be immensely valuable as we strive to make necessary enhancements before the conclusion of the discussion period. | Summary: This paper presents a theoretical analysis of dataset distillation, specifically focusing on the size and approximation error of distilled datasets. The authors provide bounds on the sufficient size and relative error of distilled datasets for kernel ridge regression (KRR) based methods using shift-invariant kernels. They prove the existence of small distilled datasets and show that a KRR solution can be generated using these distilled datasets that approximate the solution obtained on the full input data. The theoretical results are validated through empirical experiments on synthetic and real datasets.
Strengths: (1) The paper addresses an important problem in dataset distillation by providing theoretical bounds on the size and approximation error of distilled datasets. This fills a gap in the literature where previous work has mainly been empirical.
(2) The use of random Fourier features (RFF) and kernel ridge regression (KRR) provides a solid theoretical foundation for the analysis.
(3) The paper includes both analytical proofs and empirical validation to support the theoretical results.
Weaknesses: (1) The evaluation of the proposed method could be further strengthened by comparing it with other baseline methods in the field of dataset distillation.
(2) The clarity of the exposition could be improved, as some sections of the paper are not easy to understand without prior knowledge of the topic.
(3) In line 223, there are two clusters and each cluster has 5000 points. Will they conflict with the $10^5$ points?
(4) The paper should provide more experiments on real and bigger datasets and the visualization of KRR predictive functions on the MNIST dataset.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Can you provide more experimental results on larger real-world datasets and make appropriate visualizations to prove effectiveness?
(2) Can you provide more explanation about how understanding data distillation can guide the performance of data distillation?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not fully discussed how a better understanding of data distillation will guide and evaluate subsequent data distillation work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The reviewer's comments and expert evaluation are highly valued by us. Indeed, incorporating your feedback has already led to enhancements to the paper. We eagerly anticipate continued interaction with the reviewer during the forthcoming open discussion phase.
We have meticulously addressed each of the comments and questions raised in the initial review. We remain hopeful that, based on our comprehensive responses, the reviewer might contemplate raising the score. Should there be a need for additional clarification, please feel free to contact us without hesitation.
**Response to Comment 1 and Comment 4:**
Thanks for pointing this out. We first want to clarify that in this work, we did not propose new (competing) methods for distillation. We aimed at providing the first proof that answers the questions "Why do KRR-based distillation methods work?" and "Which guarantees about the size and error can we obtain?" Notably, this paper is the first to prove the existence of a small distilled set, with its approximation error, in the field of data distillation. The experiments were conducted mainly to validate the theoretical analysis in practice.
However, following your valuable comment and to further improve the practical justification of our work, we have added more experiments on Cifar10 (see the attached PDF) and we are running now more experiments on SVHN -- will be added once done.
For visualizing KRR predictive functions on the MNIST dataset, can you please elaborate more on that? We will be happy to do so.
Finally, our paper's objective is to establish the achievable loss bounds using the KIP dataset distillation algorithm. Given this, we do not see how we should benchmark other methods, as our derived bounds aren't pertinent to their evaluation.
**Response to Comment 2:**
This is indeed important! Following your comment, we have revised the writing of the paper, by providing more details before each theorem/claim/definition. We also added an intuition paragraph behind the proof idea; please see the "Clarity of our theoretical results" section above that provides most of the added details to our manuscript -- explaining theorems 2 and 3.
For your convenience: https://openreview.net/forum?id=XWYv4BNShP&noteId=RA4SAIBzsG
**Response to Comment 3:**
Thanks for the careful reading. We applied thresholding to the data points so they wouldn't conflict as depicted in Figure 1. This would ensure that the KRR on this data is able to distinguish between the two classes. Following this comment, we have clarified that in the paper.
------
**Answer to Question 1:**
Certainly, see responses to comments 1 and 4.
**Answer to Question 2:**
Certainly, and thanks for raising this. We first state that, in general, deriving bounds on the size of distilled sets helps researchers develop new distillation algorithms and test them on new datasets. Such bounds on the size and approximation error provide a mechanism to debug newly suggested algorithms on these new datasets. Additionally, understanding the theory behind the existence of such small distilled sets (with accompanying proofs) is the first stepping stone towards provable dataset distillation techniques, which do not exist at all in the literature. The final (long-term) goal is to leverage this knowledge and understanding of dataset distillation to provide algorithms that provably generate a small set $S$ that encapsulates all of the information in the input data $X$, and thus will guarantee the success of training a deep learning model on the distilled data.
We believe that our paper is the first step towards a better understanding of dataset distillation, which will allow the research community to provide better provable distillation algorithms and interpret the theory behind them. We hope that our work will provide the first theoretical stepping stone towards analyzing and better understanding the magic behind dataset distillation techniques (specifically KRR-based).
Furthermore, specifically in our work, our theoretical derivations indicate that any set can be distilled (achieving the approximation error we provided) as long as its labels satisfy the following: the solution of the ridge regression involving the distilled set mapped via random Fourier features (RFF) is equivalent to the solution of the ridge regression involving the input data mapped via RFF. From a practical point of view, this indicates why LabelSolve (LS) [2, 3] succeeds: LS learns the labels of a given set of distilled points by minimizing the KRR error between the distilled set and the input data, so our paper can also be regarded as a theoretical justification for such a method. We also note that our theory can guide a distillation algorithm in finding the best labels given the distilled set, i.e., one can use our derivations to refine the labels of the distilled set, allowing for provable guarantees and better results for KRR-based distillation techniques. One can then apply bi-level optimization that combines, for instance, LabelSolve and KIP (or RFAD [1]), using our theory to better direct these optimizers (over instances and their labels) toward a better distilled set. We leave this as an open question and direction in the field of dataset distillation.
[1] Loo, N., Hasani, R., Amini, A., & Rus, D. (2022). Efficient dataset distillation using random feature approximation. Advances in Neural Information Processing Systems, 35, 13877-13891.
[2] Nguyen, T., Novak, R., Xiao, L., & Lee, J. (2021). Dataset distillation with infinitely wide convolutional networks. Advances in Neural Information Processing Systems, 34, 5186-5198.
[3] Nguyen, T., Chen, Z., & Lee, J. (2020). Dataset meta-learning from kernel ridge-regression. arXiv preprint arXiv:2011.00050.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to respond in depth to my concern and for the additional evaluation. I prefer to keep my score (5).
Rebuttal: We deeply thank the reviewers for providing us with both positive feedback and valuable constructive criticism. Your professional review and careful reading have already helped us improve our work. We have thoroughly addressed all the comments raised during the initial review. If further clarity is required, please do not hesitate to reach out. Your engagement is highly valued.
We thank the reviewers for providing the following **Positive feedback**:
1. Rev BMB1: “The paper is the first attempt to theoretically guarantee the existence of dataset distillation”| Rev mfyC: fills a gap in the literature| Rev piM1: “First theoretical work in dataset distillation + theoretical results are well established”| Rev uCrm: "The paper derives valid error bounds for KRR-based dataset distillation methods"
2. Rev mfyC: Experimental study justifying the theoretical bounds: The paper includes both analytical proofs and empirical validation to support the theoretical results| Reviewer BMB1: “Experimental results are seemingly consistent with the theoretical results + the experimental protocol was clear".
3. The writing is logical and consistent (Rev uCrm).
4. The contributions of this paper and how this paper inherits previous results are very clear (Rev uCrm).
5. The use of random Fourier features (RFF) and kernel ridge regression (KRR) provides a solid theoretical foundation for the analysis (Rev mfyC).
**Clarity of our theoretical results**
Following your insightful comments, we revised the writing, by providing details before each theorem/claim/definition. We also added an intuition paragraph behind the proof idea.
Specifically, we added the following before Theorem 2:
The following theorem bounds the difference (additive approximation error) between (i) The MSE loss between the ground truth labels and the predictions obtained by applying Kernel Ridge regression (KRR) on the raw (original) data, and (ii) The MSE between the ground truth labels and the predictions obtained when applying Ridge regression on the mapped (full) training data via random Fourier features (RFF).
With this in mind, the goal of Theorem 2 is to set the minimal dimension of the RFF that yields the desired additive approximation ($4 \lambda$). The intuition behind using this theorem in our context is to link the dimension of the RFF with the size of the distilled set in Theorem 3.
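To illustrate this link numerically (a hypothetical sketch only: the data, Gaussian kernel, bandwidth, and regularisation scale are our assumptions, not the exact setting of Theorem 2), one can check that the MSE of Ridge regression on the RFF image approaches the in-sample KRR MSE as the RFF dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, gamma = 300, 4, 1e-2, 0.5

X = rng.normal(size=(n, d))
y = np.sin(X.sum(axis=1)) + 0.1 * rng.normal(size=n)

# In-sample KRR predictions with the Gaussian kernel k(u,v) = exp(-gamma||u-v||^2).
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)
mse_krr = np.mean((y - K @ np.linalg.solve(K + lam * n * np.eye(n), y)) ** 2)

def rff_gap(s_phi, seed):
    """|MSE of Ridge regression on an s_phi-dimensional RFF map - KRR MSE|."""
    r = np.random.default_rng(seed)
    W = r.normal(scale=np.sqrt(2 * gamma), size=(d, s_phi))
    b = r.uniform(0, 2 * np.pi, size=s_phi)
    Z = np.sqrt(2.0 / s_phi) * np.cos(X @ W + b)
    beta = np.linalg.solve(Z.T @ Z + lam * n * np.eye(s_phi), Z.T @ y)
    return abs(np.mean((y - Z @ beta) ** 2) - mse_krr)

# A larger RFF dimension yields a smaller additive gap (averaged over draws).
gap_small = np.mean([rff_gap(10, s) for s in range(3)])
gap_large = np.mean([rff_gap(2000, s) for s in range(3)])
assert gap_large < gap_small
```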
To that end, we use this error bound and the sufficient size (the minimal dimension of the RFF) to prove that a small distilled set suffices. This is done in Theorem 3. We added the following details to further explain it:
*The goal of Theorem 3.* is to prove the existence of a small distilled set $S$ (its size is a function of the minimal dimension of the RFF mapping required to ensure the provable additive approximation stated in Theorem 2) satisfying that:
(i) The Ridge regression model trained on the mapped training data via RFF is identical to that of the Ridge regression model trained on the mapped small distilled set via RFF,
(ii) more *importantly* there exists a KRR solution formulated for $S$ with respect to the loss of the whole big data $X$, which approximates the KRR solution on the whole data $X$ (which is the goal of KRR-based dataset distillation techniques). Thus,
(iii) we derive bounds on the difference (approximation error) between (1) The MSE between the ground truth labels of the full data and their corresponding predictions obtained by the specific KRR model (we previously described) on our distilled set and (2) The MSE between the ground truth labels and the predictions obtained when applying KRR on the whole data $X$.
*Main idea.* The heart of our approach lies in connecting the minimal dimension of the RFF required for a provable additive approximation and the size of the distilled set. This is first done by showing that the distilled set can be any set $S$ of instances from the input space (e.g., images) and their corresponding labels, *as long as the corresponding labels maintain a certain property*. Specifically, the labels of the distilled set need to be in correlation with the normal of the best hyperplane found to fit the mapped training data $\tilde{\mathbf{X}}$ via the Ridge regression model trained on $(\tilde{\mathbf{X}},y)$, i.e., $\left(\tilde{\mathbf{S}}^T\tilde{\mathbf{S}} + \lambda n s_\phi I_{s_\phi}\right)\left(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}} + \lambda n s_\phi I_{s_\phi}\right)^{-1}\tilde{\mathbf{X}}^T y = \tilde{\mathbf{S}}^T y_{\mathbf{S}}$.
From here, the idea hinges upon showing the existence of a KRR model (represented by a prediction function) that would be dependent on the prediction function that can be obtained from applying the Ridge regression problem to the mapped full training data via RFF.
With such a model, the idea is to retrieve, through a KRR model applied to the distilled set, the predictions obtained by the Ridge regression on the mapped training data via RFF.
We thus show that through careful mathematical derivations, equation reformulation (involving $\beta$), and solving a system of equations, one is able to show the existence of a KRR solution that would allow us to use Theorem 2. Finally, to obtain our bounds, we also rely on the use of the weak triangle inequality.
To that end, we now utilize the described KRR model on the distilled data together with Theorem 2 to achieve (iii).
*For Remark 6*, we note that it is an immediate result of Theorem 3. In Theorem 3, the approximation error is a function of a parameter $\tau$ that is related to which version of the weak triangle inequality is being used. Setting $\tau = 2$, we achieve the bound in Remark 6, which is a simplified version of Theorem 3.
Finally, for better readability and completeness, we have restated the proof of Theorem 2 from [LTOS21] in the appendix of the paper.
Thanks for the valuable comments.
Pdf: /pdf/848431e5424a20199afa73b7269f1e14aadce809.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Implicit Contrastive Representation Learning with Guided Stop-gradient | Accept (poster) | Summary: The paper proposes the implicit contrastive learning algorithm, which uses the guided stop-gradient to push away negative samples without the uniformity term in the contrastive loss. By applying the method to the non-contrastive methods including SimSiam and BYOL, the paradigm combines the advantages of contrastive and non-contrastive algorithms and boosts the downstream performance in various downstream tasks.
Strengths: 1. The improvements brought by GSG are significant and the experiments are solid.
2. The main idea and insights of GSG are easy to follow.
3. The combination between the asymmetric architectures and contrastive loss is interesting.
Weaknesses: 1. Besides the guided stop-gradient, simply combining the asymmetric architectures and the contrastive loss (e.g., InfoNCE) seems like it would have the same effect. So what is the advantage of implicit contrastive loss? Is it possible that the advantages of GSG with small batch sizes are brought by the asymmetric architecture? More comparisons between the implicit contrastive loss and the explicit contrastive loss would make this paper more solid.
2. In Section 6.2, the authors show that GSG contributes to training stability by removing the predictors. However, the final accuracy is still far from the results with the predictor. Is it possible to show that the GSG stabilizes the training process of SimSiam with the predictor? It would be better to show more differences between SimSiam and SimSiam with GSG during the training process.
3. When selecting where to apply stop-gradient, the paper uses the distance in the projection layer. However, the loss is calculated in the prediction layer. Besides the brief explanations on page 4, it would be better to provide more theoretical or empirical evidence.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1] asymmetric architecture + contrastive loss
MoCo is an algorithm that combines asymmetric architecture (stop-gradient, momentum encoder) and contrastive loss (InfoNCE). We refer the reviewer to Figure 2c in [1]. As shown in Table 4, even in the case of MoCo, the performance is not good when the batch size is small. The advantage of implicit contrastive loss is that, unlike contrastive loss, it works well with small batch sizes. From this example, we can see that this advantage comes from the implicit application of contrastive learning rather than from asymmetry.
---
[W2] training stability
As described in Lines 282-285 and Figure 4c, SimSiam w/ GSG shows a more stable learning curve in the training process. To quantify this, we measure how much the accuracy $x_t$ fluctuates at the beginning of training. We take the first difference $y_t=x_t-x\_{t-1}$ and find the standard deviation of $\\{y_t\\}_{t=1}^{50}$. As can be seen below, SimSiam w/ GSG has the smallest standard deviation and is therefore the most stable. Please also note that in the case of SimSiam w/ Reverse SG, training collapsed.
|Algorithm|Std|
|-|-|
|SimSiam|11.306|
|SimSiam w/ Random SG|9.755|
|SimSiam w/ Guided SG|**5.527**|
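The stability measure above is simply the standard deviation of the first differences of the early learning curve; a small sketch on invented toy curves (the curves and their noise levels are hypothetical, not real training data):

```python
import numpy as np

def stability_std(acc, k=50):
    """Std of the first differences y_t = x_t - x_{t-1} over the first k
    steps of an accuracy curve; smaller means a smoother early curve."""
    y = np.diff(np.asarray(acc, dtype=float)[: k + 1])
    return float(np.std(y))

# Toy curves: the same underlying trend with different amounts of jitter.
t = np.arange(60)
rng = np.random.default_rng(0)
smooth = 50 * (1 - np.exp(-t / 20)) + rng.normal(0, 1.0, t.size)
noisy = 50 * (1 - np.exp(-t / 20)) + rng.normal(0, 8.0, t.size)

# The smoother curve has the smaller stability std, as in the table above.
assert stability_std(smooth) < stability_std(noisy)
```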
---
[W3]
Since the predictor $h$ is a two-layer MLP head, it can be expressed as $h = f_2 \circ \phi \circ f_1$, where $f_1$ and $f_2$ are affine functions ($x \mapsto Ax+b$) and $\phi$ is the ReLU activation function ($x \mapsto \max(0, x)$). It is known that affine functions and ReLU are Lipschitz continuous. Note that a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is Lipschitz continuous if there exists a constant $L$ such that $\lVert f(x) - f(y) \rVert_2 \leq L \lVert x- y \rVert_2$ for all $x, y \in \mathbb{R}^n$. Since the composition of Lipschitz continuous functions is also Lipschitz continuous, $h$ is Lipschitz continuous. So where $\lVert z - z' \rVert_2$ is small, $\lVert h(z) - h(z') \rVert_2$ is also small.
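This composition argument can be checked empirically; a minimal sketch with a randomly initialised two-layer head (the layer sizes are arbitrary), using that ReLU is 1-Lipschitz and an affine map $x \mapsto Ax+b$ is $\lVert A \rVert_2$-Lipschitz, so $L = \lVert A_2 \rVert_2 \lVert A_1 \rVert_2$ is a Lipschitz constant for $h$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 8, 16, 8          # arbitrary layer sizes

A1, b1 = rng.normal(size=(d_hid, d_in)), rng.normal(size=d_hid)
A2, b2 = rng.normal(size=(d_out, d_hid)), rng.normal(size=d_out)

def h(x):
    """Two-layer MLP head: affine -> ReLU -> affine."""
    return A2 @ np.maximum(A1 @ x + b1, 0.0) + b2

# Product of spectral norms: a valid Lipschitz constant for h.
L = np.linalg.norm(A2, 2) * np.linalg.norm(A1, 2)

for _ in range(200):
    z, zp = rng.normal(size=d_in), rng.normal(size=d_in)
    assert np.linalg.norm(h(z) - h(zp)) <= L * np.linalg.norm(z - zp) + 1e-9
```

So $\lVert h(z) - h(z') \rVert_2$ is bounded by $L \lVert z - z' \rVert_2$, which is exactly why a small projection-space distance implies a small prediction-space distance.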
---
[1] He, Kaiming et al., Momentum contrast for unsupervised visual representation learning, 2020, CVPR.
---
Rebuttal Comment 1.1:
Title: reply to the rebuttal
Comment: I thank the authors for the responses. I will maintain my original rating. | Summary: This article presents a way to improve the learning of non-contrastive self-supervised learning methods such as BYOL and SimSiam.
It incorporates implicit contrastive notions by removing elements of the loss that may lead to close representations collapsing together. This modification is shown to improve the representations for BYOL and SimSiam in different experiments. In particular, variations of the algorithm with other Stop Gradients are shown to be detrimental to learning, and the utility of the algorithm in preventing collapse at low batch sizes or without projectors is shown.
Strengths: The presented method is simple but provides consistent improvement for both BYOL and the SimSiam learning methods.
In particular, the method improves low batch sizes as well as when the predictor is removed.
The method is ablated to consider different Stop Gradients variations.
The article is clear and well written.
Weaknesses: I find the article a bit lacking in ablations and inquiries about the method. For instance:
- I would have liked a variation of the algorithm with more than only 2 examples. Does using N examples for the implicit contrast improve the method further?
- And the method prevents collapse without a predictor, but the improvement is more unclear in the general case. Some measure of dimensionality for instance, or something similar, would have helped demonstrate the improvement of the method is due to helping a potential collapse of the representations. (I do not find the t-SNE visualisations to be convincing)
I am surprised by the accuracy results presented for the benchmark of the method. Exploring Simple Siamese Representation Learning, Chen et al., 2021, provides in Table 4 accuracies for SimCLR and MoCo which are much closer to the SimSiam accuracies than the ones presented in this article. In particular, the End-to-End accuracies seem very low. For these reasons, I am a bit unconvinced by the benchmark.
I found the 3 Figures redundant for explaining the method. Similarly, I found the method to be a bit overexplained.
The paper in its present state is a bit light, which is why I am only considering a weak accept for now.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Computing distances between representations is quite costly. I would expect a slowdown due to the computation for the method. Is it present, or is it compensated by the reduced number of computations needed by the Stop Gradients?
Have other alternatives, such as reweighting the "attracting" part of the loss rather than simply removing it, been considered?
Do the authors have some hypotheses about the links between this work and explicit contrastive learning methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations of the article, albeit in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1-1] using $N$ examples
To use $N$ examples at once when constructing the loss, we need to create decision criteria like Equation (6) that consider $4{N \choose 2}$ distances together. We think this is hard to achieve by a straightforward extension of our idea, since when deciding which side to apply stop-gradient to for each example, the $N-1$ results from comparisons with other examples may not be consistent.
In our idea, we drive representations away from each other by pairing each example with another. So it is natural to consider two examples at a time when constructing the loss. Since we calculate the cost by aggregating the losses over the batch, we after all use as many examples as the batch size in each gradient update.
---
[W1-2] measure of dimensionality
Please see [G1] in the global response above.
---
[W2] accuracy for the benchmark
As for the performance of the benchmark algorithms in Table 4, we used those reported in SimCLR and MoCo's original papers (please refer to Table B.1 in [1] and Figure 3 in [2]). The SimCLR and MoCo accuracies in Table 4 in [3] you mentioned are obtained under different settings than ours.
For SimCLR, the higher accuracy is that of a much larger batch size of 4096. This again shows that the performance is greatly affected by the batch size in the case of explicit contrastive learning.
For MoCo, the performance reported in that table is for MoCo v2 [4], which is an improved version of MoCo. Also, in the MoCo framework, negative samples come from a dictionary decoupled from the batch, but the dictionary size is not stated in the paper.
---
[Q1] distance computation cost
There is additional computation due to computing distances. However, it is a vectorized computation that is performed once per iteration. Passing through a ResNet is a far more dominant cost.
---
[Q2] reweighting the loss terms
The experiment in Section 6.1 can be seen as such a reweighting. When we construct the loss, we need to choose one term from $L_1=\\{\frac{1}{2}\mathcal{D}(p_{11}, \text{sg}(z_{12})), \frac{1}{2}\mathcal{D}(p_{12}, \text{sg}(z_{11}))\\}$ and one from $L_2=\\{\frac{1}{2}\mathcal{D}(p_{21}, \text{sg}(z_{22})), \frac{1}{2}\mathcal{D}(p_{22}, \text{sg}(z_{21}))\\}$ (refer to Figure 2a).
This can be relaxed by placing a two-point distribution over each set. For each set, if we give probability 1 to the term chosen by the GSG rule and 0 to the other term, we recover GSG. Conversely, probabilities of 0 and 1 give Reverse SG, and probabilities of 0.5 and 0.5 give Random SG.
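As a concrete illustration of this relaxation (a hypothetical helper we wrote for this response, not code from the paper), the per-set choice can be sampled from the two-point distribution:

```python
import random

def pick_loss_term(guided_term, other_term, p_guided):
    """Sample one of the two candidate loss terms of a set.

    p_guided = 1.0 recovers Guided SG, p_guided = 0.0 recovers
    Reverse SG, and p_guided = 0.5 recovers Random SG.
    """
    return guided_term if random.random() < p_guided else other_term
```

The three stop-gradient variants thus correspond to three points on a one-parameter family.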
---
[Q3] link between implicit and explicit contrastive learning
Please see [G2] in the global response above.
---
[1] Chen, Ting et al., A simple framework for contrastive learning of visual representations, 2020, ICML.
[2] He, Kaiming et al., Momentum contrast for unsupervised visual representation learning, 2020, CVPR.
[3] Chen, Xinlei and He, Kaiming, Exploring simple siamese representation learning, 2021, CVPR.
[4] Chen, Xinlei et al., Improved baselines with momentum contrastive learning, 2020, arXiv preprint arXiv:2003.04297.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers to my questions, notably on the benchmark and the computation cost.
- For my question about using $N$ examples, I meant a generic number of examples, not only 2. I apologize for the use of $N$, as I did not mean the batch size in particular. Even using 3 examples could allow a more complex formulation by increasing the number of distances considered and thus the strength of the implicit contrast.
- The addition of the relative variance does show an improvement in the dimensionality measure.
Some of my concerns have been answered; however, I still find the article light since there are no ablations or extensions of the method. This method is still effective at low batch sizes, which makes it a worthwhile intermediate between contrastive and non-contrastive learning, so I am keeping my rating as a weak accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer imWm
Comment: We thank the reviewer for the response. We hope the following addresses the reviewer's concerns.
---
Our ablation study can be found in Section 6. There we provide:
1. To assess the effect of **Guided SG**, we compared it with the other variants, Random SG and Reverse SG.
2. To assess how much the **predictor** contributes, we compared against the case where the predictor is removed.
3. By comparing SimSiam and BYOL, we investigated the influence of the **momentum encoder** (present only in BYOL).
Note that the components of SimSiam w/ GSG are two encoders, one predictor, and the GSG method. In the case of BYOL w/ GSG, a momentum encoder is one of the two encoders. Therefore, our ablation study covers the major components.
---
The extension the reviewer mentioned requires bringing in other ideas to deal with new situations.
For instance, if there are 3 examples, there are 2 reference examples $x_2$ and $x_3$ for one example $x_1$. When determining which of the two projections $z_{11}$ and $z_{12}$ to apply stop-gradient to, the result from the relationship with $x_2$ and the result from the relationship with $x_3$ may differ: one may indicate $z_{11}$ and the other $z_{12}$. We would then need another criterion to break the tie, and even more situations must be considered with more than 3 examples.
However, introducing these additional ideas is not a straightforward extension. We believe different papers require different levels of extension. Since this paper aims to propose a simple but transformative idea, adding more to the current algorithm could make it overly complex and is beyond the scope of the paper. | Summary: This paper proposes a novel SSL technique that can be applied on top of SimSiam or BYOL to select where to apply the asymmetric predictor. The method first computes embeddings (before the predictor) and computes the relevant distances; based on these distances, it chooses where to apply the predictor. Experimental results show improvements of this method over the SimSiam/BYOL baselines.
Strengths: 1. The idea is well motivated, based on the dynamics of SimSiam/BYOL representation space.
2. Experiments show the effectiveness of the proposed method. I appreciate the full evaluation, including linear probe, kNN, transfer learning, and detection.
3. Ablation of several different strategies for choosing predictors is very insightful.
Weaknesses: 1. Overall the innovation is limited. It is an interesting trick on top of SimSiam/BYOL, but not transferable to general SSL architectures.
2. The experiments are questionable. Using batch size 512 is not considered optimal for ImageNet pre-training. In fact, the ImageNet linear probe numbers reported for the BYOL baseline are lower than in the original paper for this batch size (it is supposed to be only about 0.5% lower than with a 4k batch size). I understand that training used only 8 NVIDIA A100 GPUs, but it is possible that the hyperparameters are no longer optimal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Does this method effectively double the batch size? How exactly is each pair of images sampled? What is the memory/compute overhead?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1] generalizability
Asymmetry is an important topic in recent self-supervised representation learning [1]. Algorithms that utilize asymmetry, such as SimSiam, BYOL, SwAV, and DINO, are continuously emerging. In this paper, we showed that asymmetry, originally introduced to prevent collapse, can help improve performance if exploited more actively. Since this is the first paper in this direction, we first applied our idea to the relatively simple SimSiam and BYOL. However, we believe that the idea of implicitly performing contrastive learning through asymmetry can be applied to other algorithms in different ways or provide a clue to the emergence of new algorithms.
---
[W2] performance in the original paper of BYOL
We believe the BYOL performance in the original paper that you mention is from Table 1 in [2]. The performance in that table results from a different setting, including longer training (please refer to Section 3.3 in [2]). Our paper focuses on the relative performance gap between algorithms, so the hyperparameters of all algorithms are set to those of the original SimSiam paper for an apples-to-apples comparison.
---
[Q1] sampling pairs of images
We put the given batch and a shuffled batch side by side and pair the images one by one (please refer to Appendix A). The batch size remains the same because negative samples are paired within a given batch.
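A minimal sketch of this pairing (an illustrative reconstruction, not the paper's exact code; the function name and self-pairing re-draw are ours):

```python
import numpy as np

def make_pairs(batch, seed=0):
    """Pair each example with another example from a shuffled copy of
    the same batch. The batch size is unchanged: negative partners come
    from within the given batch, not from extra samples."""
    rng = np.random.default_rng(seed)
    n = len(batch)
    idx = rng.permutation(n)
    while np.any(idx == np.arange(n)):  # avoid pairing an example with itself
        idx = rng.permutation(n)
    return batch, batch[idx]
```

Placing the batch and its shuffled copy side by side then pairs the images one by one, as described above.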
---
[1] Wang, Xiao et al., On the importance of asymmetry for siamese representation learning, 2022, CVPR.
[2] Grill, Jean-Bastien et al., Bootstrap your own latent-a new approach to self-supervised learning, 2020, NeurIPS.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response.
All my questions are addressed. I will update my score accordingly. | Summary: The paper introduces the Guided Stop-Gradient (GSG) method that can be applied to SSL algorithms that adopt asymmetric dual encoders such as BYOL and SimSiam in order to boost their performance and stabilize their training. The idea of the GSG is to augment the loss function to attract different views of two different images instead of just one as it is done in BYOL and SimSiam. However, the method does not explicitly repel the representations of the views of the different images but carefully selects which positive views to attract with a stop-gradient operation. Experiments with pretraining are performed on BYOL and SImSiam models using ImageNet and CIFAR10 datasets and their trained model is applied on several downstream tasks.
Strengths: - The method is simple and practical to implement.
- Results are consistent and demonstrate increased performance on pretraining and downstream tasks.
- Using this method, models can be trained with lower batch size which is a plus.
Weaknesses: I believe that the paper in the current form lacks analysis (and intuition) that would help understanding why the method works better than the baselines. For example, I would like to see more analysis on the structure of the learned representation space, and the effect of the choice of the batch size and number of “negative” images added to the loss. Given that the experiments are performed on smaller datasets, this is feasible to add. See also my questions below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have the following questions:
1. I find that choices of k in Section 5.1. unintuitive. In the case of CIFAR10, wouldn’t a large k give more insights into how well the latent space is separated? On the other hand, using k=200 for ImageNet seems very large. Could you report the variation in labels for the retrieved neighbors, i.e., is the majority vote significant? This would be interesting to analyze also in case of CIFAR10 for larger k.
2. In linear probing, you train the classification layer for the same amount of epochs as the backbone and on a much larger batch size, which sounds a lot. Is there a reason for increasing the batch size to 4096? Have you tried with just a few iterations and smaller batch size?
3. I was wondering if you could add some discussion and analysis that would help better understand why GSG works well. It would be particularly interesting to understand why GSG isn’t affected by changes in the batch size. My hypothesis is that this is because the model isn’t explicitly penalized for misplacing (potentially many) negative examples. This is partially addressed in Appendix E, however, I find tSNE visualization subjective. Do you have any numbers supporting the separation of clusters, for example, std of representations of each class? Another experiment could be to add more images to your loss (x3, x4, …), would that hurt or boost the performance?
4. I am wondering what is the dimension of the representation space? Just to be clear, in my understanding you use z in all your experiments?
Minor:
5. In Section 5.2. it is not clear whether you finetune the backbone or use the frozen model.
6. When applying random stop-gradient in Section 6.1, do you randomly choose two terms out of 4 or do you randomly choose one term for each of the images? i.e, randomly choosing either first or second term and either third or fourth in Equation 2?
7. I believe that the claim in line 284 that the fluctuation of the accuracy in the beginning of the training of SimSiam is less severe when the model is trained with GSG is too strong without supporting numbers. Could you add some variance analysis?
8. Section 3 is helpful for understanding the idea of the model but it would be great if you could make it explicit that x1 and x2 are different images.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Code is provided but the paper checklist is not. Societal impacts nor limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1] analysis on the learned representation space
Please see [G1] in the global response above.
---
[Q1] choices of $k$ for $k$-NN
We tried to make the experimental settings identical to previous studies for easy and fair comparison. So for the value $k$, we used the default value in [1] for the CIFAR-10 experiments and the value used in [2, 3] for the ImageNet experiments. It is hard to report the label variation for the retrieved neighbors since it varies per example. Table 11 of [3] also shows the results for $k=20$ in ImageNet experiments. So we also report the results for $k = 20$ below. Performance was still better when GSG was applied.
|Algorithm|$k$-NN acc. (%)|
|-|-|
|SimSiam|55.6|
|SimSiam w/ GSG|**62.8**|
|BYOL|60.8|
|BYOL w/ GSG|**66.2**|
---
[Q2] number of iterations and batch size for linear evaluation
For linear evaluation, we used the same number of iterations and batch size as in SimSiam's paper (please refer to Section A in [4]). In linear evaluation, even after a few iterations, the accuracy comes close to the final accuracy. For example, in the case of SimSiam, the accuracy after 90 epochs is about 67.9% and the accuracy after 10 epochs is about 64.3%. We also tried batch size 256, and it gave a slightly lower accuracy (~1%).
---
[Q3] analysis and discussion on why GSG works well
Please see [G1] and [G2] in the global response above. Regarding adding more images to the loss, it is hard to scale naturally, given that our method compares distances between projections. With $N$ examples, we would need to compare $4{N \choose 2}$ distances. Also, when considering one example, the results obtained by comparing it with the remaining $N-1$ negative examples may not be consistent.
---
[Q4] dimension of the representation space
In many algorithms (SimSiam, BYOL, SimCLR, etc.), including ours, the encoder is a backbone plus a projector, which is a shallow MLP head. In the actual evaluation, we use the representations (from the backbone), which are closely related to the projections (from the projector). Attaching a projector after the backbone is a common practice to improve performance; we refer the reviewer to Line 108 and Footnote 2. The dimension of the representation space is thus the same as the input dimension of the projector: 512 for CIFAR-10 and 2048 for ImageNet (please refer to Lines 42 and 49 in the Appendix).
---
[Q5] transfer learning evaluation mode
We used the frozen model (please refer to Line 232).
---
[Q6] random stop-gradient
We randomly chose one of the four equations in Equation (6).
---
[Q7] supporting numbers for the fluctuation of the accuracy
To measure the degree of fluctuation of the accuracy $x_t$ at the beginning of training, we take the first difference $y_t = x_t - x_{t-1}$ and compute the standard deviation of $\\{y_t\\}_{t=1}^{50}$. The following shows that SimSiam w/ GSG has a much smaller standard deviation; thus, its accuracy fluctuates less than that of the other algorithms.
|Algorithm|Std|
|-|-|
|SimSiam|11.306|
|SimSiam w/ Random SG|9.755|
|SimSiam w/ Guided SG|**5.527**|
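This measure takes a few lines to compute (an illustrative sketch; `acc` is the per-step accuracy sequence $x_0, x_1, \dots$):

```python
import numpy as np

def fluctuation_std(acc, t_max=50):
    """Std of the first differences y_t = x_t - x_{t-1}, for t = 1..t_max."""
    diffs = np.diff(np.asarray(acc, dtype=float)[: t_max + 1])
    return float(diffs.std())
```

A smoothly increasing accuracy curve has near-constant differences and hence a small value, while a curve that oscillates early in training gives a large one.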
---
[Q8] $x_1$ and $x_2$ are different images.
Thank you for your suggestion. We will add this in the revised version.
---
[L1] paper checklist, societal impacts, limitations
In this NeurIPS, the paper checklist was not attached to the back of the paper, but was to be filled in on the OpenReview system. You can see it at the top of this page. Societal impacts and limitations are discussed in Section G.
---
[1] Susmelj, Igor et al., Lightly, 2020.
[2] Wu, Zhirong et al., Unsupervised feature learning via non-parametric instance discrimination, 2018, CVPR.
[3] Caron, Mathilde et al., Deep clustering for unsupervised learning of visual features, 2018, ECCV.
[4] Chen, Xinlei and He, Kaiming, Exploring simple siamese representation learning, 2021, CVPR.
---
Rebuttal Comment 1.1:
Title: reply to the rebuttal
Comment: I thank the authors for the thorough rebuttal. I especially appreciate the discussion and results in [G1] and [G2], and the extra results to answer my questions. Based on these comments I recommend to accept the paper. I increased my rating accordingly. | Rebuttal 1:
Rebuttal: Dear reviewers,
We thank you for your careful reading and constructive feedback. Your comments will help us improve the quality of the paper. We have detailed responses to each reviewer individually below. We also write responses to some common questions here. We write [Gx], [Wx], [Qx], and [Lx] for a global response, weakness, question, and limitation reference, respectively. If additional explanations are needed, we will be happy to provide them.
---
[G1] analysis of why our method works well
In addition to the qualitative analysis (t-SNE in Section E), we performed the following quantitative analysis to support this. We first defined between-class, within-class, and relative variance and investigated whether our method improves representation quality in terms of the relative variance.
Let $\mathcal{X}$ be the set of all representations, and $N = \vert \mathcal{X} \vert$. For each label $i$ ($1 \leq i \leq K$), let $\mathcal{C}_i$ be the set of all representations with label $i$, and $N_i = \vert \mathcal{C}_i \vert$. Then, the total mean of $\mathcal{X}$ and the class mean of $\mathcal{C}_i$ are written as
$\bar{x} = \frac{1}{N} \sum_{x \in \mathcal{X}} x, \quad \bar{x}_i = \frac{1}{N_i} \sum_{x \in \mathcal{C}_i} x.$
We define the between-class variance $v_b$ and the within-class variance $v_w$ as follows.
$v_b = \frac{1}{K} \sum_{i \in [K]} d(\bar{x}_i, \bar{x}), \quad v_w = \frac{1}{K} \sum_{i \in [K]} \left( \frac{1}{N_i} \sum_{x \in \mathcal{C}_i} d(x, \bar{x}_i) \right)$,
where $[K]=\\{1,2,\cdots,K\\}$, and $d(\cdot, \cdot)$ is the Euclidean distance. So $v_b$ is the average distance between a class mean and the total mean, and $v_w$ is the average distance between a representation and its class mean. Then, the relative variance $v_r$ is written as
$v_r = v_b / v_w.$
The following is the relative variance obtained for the algorithms.
|Algorithm|Relative Variance|
|-|-|
|SimSiam|2.323|
|SimSiam w/ GSG|**2.821**|
|BYOL|2.458|
|BYOL w/ GSG|**3.049**|
The result shows our method increases between-class variance relative to within-class variance. Note also that this is similar to the goal of linear discriminant analysis (LDA) [1].
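The relative variance can be computed directly from the definitions (an illustrative NumPy sketch, not the authors' code):

```python
import numpy as np

def relative_variance(X, labels):
    """v_r = v_b / v_w, with v_b and v_w the average Euclidean
    distances defined above (between-class and within-class)."""
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    total_mean = X.mean(axis=0)
    class_means = {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}
    v_b = np.mean([np.linalg.norm(m - total_mean) for m in class_means.values()])
    v_w = np.mean([np.linalg.norm(X[labels == c] - m, axis=1).mean()
                   for c, m in class_means.items()])
    return v_b / v_w
```

Tighter, better-separated class clusters yield a larger $v_r$.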
---
[G2] discussion on why our method works well
Self-supervised learning (SSL) can basically be seen as harnessing two forces: an attracting force between positive pairs and a repelling force between negative pairs (if any). Representations are formed where these two forces are in balance.
- If the attracting force is too large, all representations come together (collapse).
- If the repelling force is too large, it repels even semantically positive pairs (a sampling bias caused by not knowing the labels).
First, contrastive learning gained momentum in SSL research, but it had drawbacks such as performance degradation due to sampling bias and the need for a large batch size to secure many negative samples.
Later, non-contrastive learning using only positive pairs, such as SimSiam or BYOL, gained more and more attention, but these algorithms raise the question of whether there might be room for improvement by using a contrastive effect.
Our implicit contrastive learning uses negative sampling like contrastive learning, but on the surface, it uses only the attracting force like non-contrastive learning. So it can be seen as an attempt to find a sweet spot between these two domains. To this end, we leverage asymmetry, which many non-contrastive learning algorithms introduce to prevent collapse initially.
As shown in Section 5.3, our algorithms performed more robustly than the contrastive learning algorithms at small batch sizes. In many explicit contrastive learning algorithms, $N-1$ negative examples are explicitly repelled from each example, whereas in our method, a single negative example is paired with each example in an implicit way. This may be why batch size reduction does not affect our algorithms much.
Also, as shown in Sections 5.1-2, it performed better than the non-contrastive learning algorithms. This may be because our method contributes to better separation of clusters, as shown in t-SNE and [G1]. As such, we confirmed the untapped potential of the use of asymmetry through this study.
---
[1] Härdle et al., Applied multivariate statistical analysis, 2019, Springer Nature. | NeurIPS_2023_submissions_huggingface | 2023
Multitask Learning for Face Forgery Detection: A Joint Embedding Approach | Reject | Summary: The proposed method introduces a novel approach to deepfake detection by integrating natural language and image information. Moreover, it attains state-of-the-art performance on several contemporary deepfake datasets and can generate explanatory sentences that justify the authenticity or falsity of the input image, which is crucial in the field of deepfake detection.
Strengths: See Questions Section in detail.
Weaknesses: See Questions Section in detail.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Although the proposed method demonstrates remarkable performance on contemporary deepfake datasets, the supplemental materials reveal that without the proposed dataSup scheme, the method can only achieve 80.76 and 75.94 AUC on Celeb-DF and DFDC, respectively, which is inferior to state-of-the-art methods.
Based on this, I have the following reservations about the submitted manuscript:
1. It would be preferable for the authors to compare the performance of their method with the proposed dataSup scheme and previous methods such as Face X-Ray, SBI and SLADD using a unified backbone and then analyze the impact of different dataSup schemes.
2. The data augmentation scheme is essential according to Table 2 in the supplemental materials. Therefore, I recommend that the authors elaborate on their dataSup scheme in detail as the current description of the scheme is obscure and difficult to replicate.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations are well illustrated in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. It would be preferable for the authors to compare the performance of their method with the proposed dataSup scheme and previous methods such as Face X-Ray, SBI and SLADD using a unified backbone and then analyze the impact of different dataSup schemes.**
**A1:** Thanks for the excellent comment. As suggested by the reviewer, we provide more comparison results with a unified backbone but different dataSup schemes. The results in the following table further validate the effectiveness of the proposed dataSup scheme over those in Face X-Ray and SBI.
| DataSup Scheme | CDF | FSh | DF-1.0 | DFDC | Mean AUC |
| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| Face X-Ray | 84.34 | 98.36 | 91.12 | 77.88 | 87.93 |
| SBI | 87.39 | 98.31 | 92.09 | 79.02 | 89.20 |
| Ours | **89.02** | **98.68** | **93.38**| **82.06** | **90.79** |
**Q2. The data augmentation scheme is essential according to Table 2 in the supplemental materials. Therefore, I recommend that the authors elaborate on their dataSup scheme in detail as the current description of the scheme is obscure and difficult to replicate.**
**A2:** Thanks for the comment. We provide more details as follows, and we will also make the source code regarding the dataSup scheme publicly available for reference.
**Expression-Eye**. (i) Choose **fake** data from Deepfakes and FaceSwap on FF++ with larger facial and expression modifications, particularly in the eye region, compared to the original faces.
(ii) Given a real face on FF++ as the background face, we directly use the chosen fake face image on FF++ to supply the fake eye part(s) as the foreground;
(iii) Generate the region-of-interest mask, i.e., the mask of the eye(s), based on the background face landmarks;
(iv) Apply the color correction on the foreground;
(v) Blend the background and the foreground according to the region-of-interest mask, in which we follow Face X-ray to adopt the Gaussian blurred binary mask.
**Physical inconsistency-Eye/Mouth/Nose**. (i) Given a real face on FF++ as the background face, we search for the nearest **real** face images (excluding the real faces with the same ID) as the foreground to provide the local fake part(s);
(ii) Generate the region-of-interest mask, i.e., the mask of the eye(s), mouth, or nose, based on the background face landmarks;
(iii) Apply the color correction on the foreground;
(iv) Blend the background and the foreground according to the region-of-interest mask.
**Physical inconsistency-illumination.** (i) Given a real face on FF++ as the background face, we search for the nearest **real** face images (excluding the real faces with the same ID) as the foreground face;
(ii) Generate the whole face mask based on the face landmarks of the background face;
(iii) Apply illumination inconsistency operation on the foreground face, in which we have three alternatives: 1) random brightness + color correction, 2) random brightness, and 3) no correction. Detailed combinations are listed in Table 1 of the Appendix;
(iv) Blend the background and the foreground according to the whole face mask.
**Physical inconsistency**. (i) Given a real face on FF++ as the background face, we search for the nearest **real** face images (excluding the real faces with the same ID) as the foreground face;
(ii) Generate the whole face mask based on the face landmarks of the background face;
(iii) Apply the color correction on the foreground face;
(iv) Blend the background and the foreground according to the region-of-interest mask.
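The final blending step shared by these pipelines can be sketched as follows (an illustrative sketch only; `soft_mask` stands for the Gaussian-blurred binary region-of-interest mask in $[0, 1]$ mentioned above, and color correction is omitted):

```python
import numpy as np

def blend(background, foreground, soft_mask):
    """Alpha-blend a foreground face (part) into a background face.

    background, foreground: HxWx3 float images in [0, 1];
    soft_mask: HxW blurred binary ROI mask in [0, 1].
    """
    m = soft_mask[..., None]  # broadcast the mask over the color channels
    return m * foreground + (1.0 - m) * background
```

Hard mask values keep one source verbatim, while blurred boundary values mix the two images, producing the soft blending boundary the pipelines rely on.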
---
Rebuttal Comment 1.1:
Comment: The additional experiments show the effectiveness of the proposed method and the detailed data augmentation description is given. However, I prefer to keep the original decision (borderline accept). | Summary: This paper proposes a multitask learning framework for video deepfake detection. The idea is to rely on a joint embedding architecture and define a set of coarse-to-fine face forgery detection tasks with corresponding textual descriptions for fake face images (binary level, global-attribute level and local-attribute level). This helps to obtain understandable explanations and hence a more interpretable forensic detector. CLIP is used to implement the joint embedding architecture, while ViT-B/32 is adopted as the visual encoder and GPT-2 as the text encoder. Experiments are carried out on several publicly available datasets and show better performance in terms of generalization compared with SOTA methods.
Strengths: - It is very relevant to design a deepfake detector that is able to generalize to different types of manipulations since often current deepfake methods perform poorly on forgeries not seen during training.
- It is also very valuable to design a detector that can provide explanations about the manipulations.
- The idea to encode the ground-truth labels via language prompts is interesting and not explored yet in the context of deepfakes.
Weaknesses: - The technical description of the method based on multitask learning (Section 3.3) is very generic and not related at all to the problem of deepfakes. In addition, the technical contribution seems to come from already published work: the joint embedding formulation is inspired by minimizing an energy-based model as in [39], and the losses for multitask learning are inspired by [73].
- The section on Multitask Language Prompts (3.2) is more related to the specific application, but it is not justified why it is important to consider a coarse-to-fine approach, and above all it is not clear how the ground-truth labels via language prompts have been generated. It is said that 'Face attribute manipulations associated with other textual prompts are already included in FF++', but this is new to me. FF++ is labelled only with four different manipulations and does not include the global-attribute and local-attribute levels described in Section 3.2. This is absolutely not clear.
- In this same section there is a reference to a face attribute called 'physical consistency', which is explained in the supplemental material. However, Section 1 of the appendix is very confusing and I was not able to understand it clearly.
- The ablation study is confusing. Several variants in Table 1 perform as well as the proposal, which is puzzling.
- The comparison with SOTA methods should be enlarged to include other methods as well, such as [66] and [19].
- The experiments showing that the explanations provided by the detector are correct are too limited: they are shown only on FF++ (Section 4.5 and Appendix). What is more interesting is the ability to generalize to datasets different from the training one, and such experiments are not present in the paper. This is very limiting and does not help to show the relevance of the proposal as stated in the Introduction.
- The paper needs a major re-writing. The presentation is poor and hence not adequate for NeurIPS.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please, refer to the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Authors have presented the limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The technical description of the method based on multitask learning (Section 3.3) is very generic and not related at all with the problem of deepfakes. In addition, the technical contribution seems to come from already published work: the joint embedding formulation is inspired by minimizing an energy-based model as in [39] and the losses for multitask Learning are inspired by [73].**
**A1**: We respectfully disagree with this comment and kindly refer the reviewer to the general response. In short, our most significant contribution is defining a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels. This naturally leads to a multi-task learning setting, which we implement with a joint embedding approach that has several desirable properties regarding semantic encoding, automation, and explainability. The CLIP model and the fidelity loss are our instantiations and can be replaced by other plausible choices. As the reviewer points out, our approach is generic and can be adapted to other vision problems, which we consider a significant advantage.
**Q2: The section on Multitask Language Prompts (3.2) is more related to the specific application, but it is not justified why it is important to consider a coarse-to-fine approach and above all it is not clear how the ground-truth labels via language prompts have been generated. It is said 'Face attribute manipulations associated with other textual prompts are already included in FF++.', but this is new to me. FF++ is only labelled using four different manipulations but does not include the global-attribute level and local-attribute level as described in Section 3.2. This is absolutely not clear.**
**A2**: We have justified the importance of such a coarse-to-fine approach experimentally in Table 3 of the main paper. We have also elaborated on how to encode the ground-truth labels via language prompts through the example in Figure 1 of the main paper. The face image in Figure 1 is a fake face with two attributes altered at different semantic levels, i.e., expression (global) and mouth (local). According to Section 3.2, we have defined nine attributes (including real and fake) for describing the face image in the task of face forgery detection. If we use “1” to represent the fake label and “0” the real one, the ground-truth label for the face image in Figure 1 is represented by the binary label 011000010 (ordered as [real, fake, expression, identity, physical consistency, eye, illumination, mouth, nose]). We further use textual templates to encode the ground-truth label, where each of the nine language-prompt templates is defined in Section 3.2. According to the descriptions of each manipulation method in [1, 2, 71, 72] and the notations defined on the FF++ project page, it is straightforward to infer the manipulations contained in FF++ at the global-attribute and local-attribute levels.
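A minimal Python sketch of the label construction described in A2 (attribute set and ordering as stated above; `encode_label` is an illustrative helper name, not from the paper):

```python
# Nine attributes defined in Section 3.2, in the ordering given in A2.
ATTRIBUTES = ["real", "fake", "expression", "identity", "physical consistency",
              "eye", "illumination", "mouth", "nose"]

def encode_label(altered):
    """Return the 9-bit binary label: '1' for each attribute that applies."""
    return "".join("1" if a in altered else "0" for a in ATTRIBUTES)

# The fake face in Figure 1: "fake" plus expression (global) and mouth (local).
print(encode_label({"fake", "expression", "mouth"}))  # -> 011000010
```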
**Q3. In this same section there is a reference to a face attribute called 'physical consistency', which is explained in the supplemental material. However, Section 1 of the appendix is very confusing and I was not able to understand it clearly.**
**A3**: The concept of physical (in)consistency is not new in the field of photo forensics [R1]. In our case, we create such inconsistency by blending a **real** background with a **fake** foreground (which can be the whole face or a face part) generated by a DeepFake algorithm or cropped from another **real** face image of a different ID. We have explained it in lines 7-16 in Section 1 of the Appendix, with examples of some types of this manipulation in lines 17-20 of Section 1 of the Appendix. To aid understanding, the reviewer may think of this fake attribute as a form of data augmentation, like the one used in Face X-ray [41] and ICT [19].
[R1] Photo Forensics, Hany Farid, MIT Press.
**Q4. The ablation study is confusing. There are several variants that perform as well as the proposal in Table 1, and this is puzzling.**
**A4**: In the ablation study, we have investigated the impact of different settings of our proposed method and provided the generalization performance on various datasets as evaluation criteria. Although some settings showed reasonable performance, our method with the default setting outperforms them overall. There is only one exception - the use of ViT-B/32 or ViT-L/14 as backbones. We have discussed in the main text why we do not choose them as the default setting in the subsection of the Encoder Architecture of Section 4.4. We kindly request the reviewer to carefully read the explanation in Section 4.4.
**Q5. Comparison with SOTA methods should be enlarged including also other methods, such as [66] and [19].**
**A5**: In Table 1 of the main paper, we have already compared our method with many recent state-of-the-art methods, including both [66] and [19], as requested by the reviewer.
We kindly refer the reviewer to Section 4.2 of the main text and the comparison results with the SOTA in Table 1.
**Q6. The experiments that show that the explanations provided by the detector are correct are too limited. ... This is very limiting and does not help to show the relevance of the proposal as stated in the Introduction.**
**A6**: The reason for selecting FF++ as the dataset for showcasing examples is that FF++ provides clear descriptions of how the four included methods manipulate each image. This gives us sufficient grounds to relabel the data at both the global-attribute and local-attribute levels. Moreover, using known samples to assess the interpretability of predictions is common practice in the literature [8, 19, 26, 66, 80]. As for the other datasets, which only have global binary annotations, we have comprehensively tested the generalization of our model trained on the FF++ variant in Table 1 of Section 4.2.
---
Rebuttal Comment 1.1:
Comment: After reading the response from the authors to my comments and to the other reviewers' comments, I increased my score from reject to borderline reject. In fact, the authors successfully addressed some of my concerns; however, I still believe that the technical contribution is not sufficiently significant for NeurIPS. I also believe that a solution proposed for face forgery detection (as stated in the title) should be tailored to this specific task. If it is general, which the authors consider a plus, then it should also be tested on other tasks to show its relevance in other applications. Also, note that comparisons with [66] and [19] are present in Table 1 but not in Table 2, where robustness is analyzed. Finally, in my opinion the explanations provided by the detector (tested only on FF++) are too limited to prove that the method works correctly. Experiments on other datasets are needed, also because the method is trained on this same dataset. Hence, in this respect, generalization is not verified. | Summary:
The paper appears to be about a method for detecting manipulated facial images, specifically deepfakes. The authors use a model that employs a joint embedding architecture, with ViT-B/32 as the visual encoder and GPT-2 as the text encoder. The model is trained using AdamW with a decoupled weight decay of $1 \times 10^{-3}$ and an initial learning rate of $1 \times 10^{-7}$, which follows a cosine annealing schedule. The authors compare their method with several state-of-the-art (SOTA) methods, including Face X-ray, PCL, MADD, LipForensics, RECCE, SBI, ICT, SLADD, and OST. The results show that their proposed method outperforms all the recent SOTA.
Strengths: 1. This paper is well written and easy to follow, and will be of interest to researchers in the multitask learning and deepfake detection communities.
2. Finding the semantic dependencies among tasks using textual prompts is clear and places the previous work very well in the context of this framework.
3. Finally, the authors provide experimental results to demonstrate the effectiveness of the proposed objective function and algorithms.
Weaknesses: 1. The majority of the contributions in this study are essentially modifications of existing work. Additionally, the significance of the main contribution appears to involve identifying similarities among previous work and proposing a comprehensive generalization that encompasses a significant portion of the existing research. While this contribution may enhance understanding, it seems to be primarily pedagogical in nature rather than being a novel research finding.
2. The complexity of the proposed method seems high (impractical). How effectively does it handle large datasets? Is it possible to use it in conjunction with sparse variational inference approaches?
3. It would be great if the authors could extend the proposed algorithm to adapt to other types of loss functions ( from eq.3 – eq.7) such as exp-concave and strongly convex functions.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How was the drop calculated in Table 2 ?
2. Contrastive textual pairing is not well defined.
3. How did you minimize the energy-based model for joint embedding?
4. What is the performance of the model when dealing with different convex loss functions?
5. How does the energy-based model function within the GPT-2 and CLIP-based embedding framework?
6. What are the challenges and potential solutions when applying your method to face images fully synthesized by GANs or diffusion models?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As described in the manuscript, the proposed method may perform unsatisfactorily when encountering fake face images generated by diffusion-model-based methods.
Also, see the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The majority of the contributions in this study are essentially modifications of existing work.**
**A1**: We respectfully disagree with the comment and kindly refer the reviewer to the general response. In short, the most significant contribution is defining a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels. This naturally leads to a multi-task learning setting, which is implemented by a joint embedding approach with several desirable properties regarding semantic encoding, automation, and explainability. The CLIP and the fidelity loss are our instantiations and can be changed to other plausible choices.
**Q2. The complexity of the proposed method seems high (impractical).**
**A2:** The reviewer may have misunderstood the complexity of our method, which we would like to clarify. The main computational cost arises from computing the image and text embeddings, while the cost of the vision-language correspondence is negligible (i.e., the cosine similarity between the image embedding and the text embeddings). It is important to note that the text embeddings can be pre-computed once and reused throughout the training and testing procedures. Thus, the main computational cost comes from extracting the image embedding, which is also required by all competing methods.
**Q3. Regarding the other types of (convex) loss functions.**
**A3:** This paper focuses on the formulation and implementation of multitask learning of face forgery detection at the semantic level, through a novel joint embedding approach. The selection of the best loss function is not our primary focus. Nevertheless, we choose the fidelity loss [R1] over the default cross-entropy loss for several reasons. First, it attains the true minimum for each desired probability: unlike the cross-entropy loss, the fidelity loss is exactly zero for a correctly predicted pair, which makes the trained model more accurate. Second, it is bounded between 0 and 1. If the loss has no appropriate upper bound, hard samples continuously placed in the wrong position could lead to excessive loss. In Table 3 of the main paper, we provide ablation studies on popular loss functions used in visual tasks that are suitable for our formulation, including the cross-entropy loss, the probabilistic loss, and the fidelity loss, and show that the fidelity loss gives the best performance.
[R1] FRank: A ranking method with fidelity loss. In ACM SIGIR, pages 383–390, 2007.
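As a minimal illustration of the two properties claimed in A3 (exact zero at the target and boundedness), the fidelity loss of [R1] for a binary prediction can be sketched as follows (the function name is illustrative):

```python
import math

def fidelity_loss(p_target, p_pred):
    """Fidelity loss [R1]: one minus the Bhattacharyya coefficient of the two
    Bernoulli distributions; bounded in [0, 1] and exactly 0 when the
    prediction matches the target."""
    return 1.0 - (math.sqrt(p_target * p_pred)
                  + math.sqrt((1.0 - p_target) * (1.0 - p_pred)))

print(fidelity_loss(1.0, 1.0))             # -> 0.0 (perfect prediction)
print(round(fidelity_loss(1.0, 0.25), 2))  # -> 0.5
print(fidelity_loss(1.0, 0.0))             # -> 1.0 (bounded worst case)
```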
**Q4. How was the drop calculated in Table 2?**
**A4**: The drop is calculated as
$$ \mathrm{Drop}= \frac{\mathrm{mAUC}_{\mathrm{perturb} }-\mathrm{AUC} _{\mathrm{clean} }}{\mathrm{AUC} _{\mathrm{clean} }} \times 100\% $$
where
$\mathrm{mAUC}_{\mathrm{perturb} }$ is the average of performance on all the perturbations, denoted as Mean AUC in the table, and $\mathrm{AUC} _{\mathrm{clean} }$ is the clean AUC.
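For concreteness, the formula can be checked with a short Python sketch (the numbers plugged in are the "Ours" and "Face X-ray" rows from the JPEG-robustness table given in a reply to another reviewer; the small difference from the reported -8.53% comes from rounding of the tabulated AUCs):

```python
def relative_drop(mean_auc_perturb, auc_clean):
    """Relative AUC drop in percent, per the formula above."""
    return (mean_auc_perturb - auc_clean) / auc_clean * 100.0

print(round(relative_drop(90.08, 98.49), 2))  # -> -8.54
print(round(relative_drop(82.24, 98.37), 2))  # -> -16.4 (Face X-ray row)
```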
**Q5. Contrastive textual pairing is not defined well?**
**A5**: The motivation of the proposed contrastive textual pairing is to encourage the model to learn a more accurate correlation between the visual and textual embeddings via contrastive learning. Specifically, given a fake face with the modified eye(s), the goal is to maximize the similarity between the image embedding and the corresponding textual embedding (i.e., “A photo of a face with the local attribute of {eye} altered”) while minimizing the similarity to the opposite textual embedding. Empirically, we find contrastive textual pairing to facilitate model optimization and boost performance, as shown in the following table. For more design details, we refer the reviewer to Section 3.2 in the main paper.
| Model Variant|CDF|FSh|DF-1.0|DFDC|Mean AUC|
|-|-|-|-|-|-|
|Ours (Default)|**89.02**|**98.68**|**93.38**|**82.06**|**90.79**|
|w/o contrastive textual pairing|87.89|98.34|93.30|81.27|90.20|
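A toy sketch of the contrastive objective described in A5, with random vectors standing in for the CLIP image/text embeddings (all names and the stand-in data are illustrative, not the paper's implementation):

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_score(img_emb, pos_text_emb, neg_text_emb):
    """Quantity that training pushes up: similarity to the matching prompt's
    embedding minus similarity to the opposite prompt's embedding."""
    return cosine(img_emb, pos_text_emb) - cosine(img_emb, neg_text_emb)

random.seed(0)
img = [random.gauss(0, 1) for _ in range(16)]
pos = [x + 0.1 * random.gauss(0, 1) for x in img]  # near-aligned "matching" prompt
neg = [random.gauss(0, 1) for _ in range(16)]      # unrelated "opposite" prompt
print(contrastive_score(img, pos, neg) > 0)  # -> True
```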
**Q6. How did you minimize the energy-based model for joint embedding?**
**A6**: The goal of joint embedding is to maximize the vision-language correspondence, while an energy-based model minimizes the energy of some physical or computational system. In our case, we transform the vision-language correspondence into a similarity probability, which is fed into a physically inspired function, the fidelity loss. The fidelity loss for compatible visual and textual embeddings is low (corresponding to low energy), while incompatible embeddings lead to a larger loss (corresponding to high energy). Thus, the optimization is consistent with the goal of an energy-based model. For more design details, we refer the reviewer to lines 162-187 in the main paper.
**Q7. How does the energy-based model function within the GPT-2 and CLIP-based embedding framework?**
**A7**: In the CLIP model, the text encoder adopts GPT-2 with a base size of 63M parameters, so GPT-2 is within the CLIP model. The energy model is built on the similarities (e.g., cosine similarity) between the image and textual embeddings. During training, we maximize the similarities between compatible image and textual embeddings and minimize the similarities between incompatible ones, which corresponds exactly to energy minimization in machine learning.
**Q8. What are the challenges and potential solutions when applying your method to face images fully synthesized by GANs or diffusion models?**
**A8**: Our current model is mainly trained on forged faces with blending operations [1, 2, 18, 31, 40, 44, 63, 71, 72], which aim to make the generated face more realistic by alleviating the effect on the authentic region. Thus, it may not perform very well when directly applied to fully synthesized faces, because they do not involve a blending operation. A simple solution is to train our model with face images fully synthesized by GANs or diffusion models, because we do not rely strongly on blending or contrastive features within the fake face image.
---
Rebuttal Comment 1.1:
Title: Follow-up on Review Feedback
Comment: We greatly appreciate the time and effort you've invested in reviewing our work and providing constructive feedback. We are following up to check whether our responses have addressed your comments and concerns.
Thank you,
Authors | Summary: This work proposes an automated multitask learning framework for face forgery detection from a joint embedding perspective. The central idea is to utilize the multi-modality of visual and textural features to enhance blending-based face forgery detection with the global and local semantic face attributes. Experiments demonstrate the effectiveness of this proposed framework.
Strengths: The new paradigm of multitask learning strategy from a joint embedding perspective is introduced into the face forgery detection field. The work trains two encoders to jointly embed visual face images and textual descriptions in the shared feature space. Thus, one can guide the forgery detection that is mainly based on visual content with textural descriptions. This work successfully explored the feasibility of using multi-modality data with a multi-task learning framework. Extensive results on the ablation studies verified the effectiveness of the proposed framework.
Weaknesses: The majority of the technical components of this work are borrowed from existing works, e.g., multitask learning, embedding space representation (latent space), textual space, etc. The technical contributions that could inspire follow-up research are quite limited.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1) The authors suggest using a textual template, where the critical description, such as real/fake, is instantiated as needed. There is no doubt about the effectiveness of this treatment. However, is it really necessary to handle the textual description using templates? What about directly using the attributes (e.g., real or fake) rather than a textual template? (One can see that the differences between the textual descriptions of real and fake images are only the keywords “real” and “fake”.)
2) For Section 4.3 Robustness Analysis, the considered perturbations are necessary. However, further distortions such as image/video compression (JPEG or HEVC) are missing. In practical scenarios, forged images are often shared via online social networks, which typically apply compression.
3) The authors stated that the proposed method can only be applied to scenarios where the forged faces are generated with blending operations. What if the blended image is further processed with some other harmonization techniques (e.g., illumination correction on the faces or on the edges between the genuine and fake regions)?
4) The adopted FF++ dataset contains video data for training, validation, and testing. As is well known, one critical cue in faked videos is temporal information. However, it seems that the authors directly neglect the temporal information.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors clearly stated the limitation of this work. The proposed method cannot be applied to fully AI-generated images, such as those produced by GANs or diffusion-based models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The technical contributions that inspire the following research are quite limited: The majority of technical components of this work are borrowed from existing works, e.g., multitask learning, embedding space representation (latent space), textural space and etc.**
**A1**: We respectfully disagree with the comment and kindly refer the reviewer to the general response. In short, the most significant contribution is defining a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels. This naturally leads to a multi-task learning setting, which is implemented by a joint embedding approach with several desirable properties regarding semantic encoding, automation, and explainability. The CLIP and the fidelity loss are our instantiations and can be changed to other plausible choices.
**Q2. The authors suggest using a textual template, where the critical description, such as real/fake, is instantiated as needed. There is no doubt about the effectiveness of this treatment. However, is it really necessary to handle the textual description using templates? What about directly using the attributes (e.g., real or fake) rather than a textual template? (One can see that the differences between the textual descriptions of real and fake images are only the keywords “real” and “fake”.)**
**A2:** As suggested by the reviewer, we conduct additional experiments by 1) simplifying the proposed textual templates to two keywords, real/fake, for textual encoding, 2) representing the two keywords, real/fake, with one-hot labels, followed by MLP encoding, and 3) no text encoder at all (i.e., a ViT-based traditional discriminative architecture that predicts multiple target outputs directly from the input face image). The experimental results in the following table verify the effectiveness of our textual templates.
| Method | CDF | FSh | DF-1.0 | DFDC | Mean AUC |
| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| Real/fake textual encoding | 85.91 | 98.45 | 93.34 | 80.44 | 89.54 |
| One-hot MLP encoding | 84.05 | 98.13 | 92.29 | 79.48 | 88.49 |
| No text encoder | 76.25 | 87.37 | 83.24 | 72.89 | 79.94 |
| Ours | **89.02** | **98.68** | **93.38**| **82.06** | **90.79** |
**Q3. For Section 4.3 Robustness Analysis, the considered perturbations are necessary. However, further distortions such as image/video compression (JPEG or HEVC) are missing. In practical scenarios, forged images are often shared via online social networks, which typically apply compression.**
**A3:** Thanks for the excellent suggestion. We conduct additional experiments to probe the robustness of the competing detectors to JPEG compression, as shown in the following table. Mean AUC indicates the averaged performance across all perturbations, i.e., Patch-Sub, Noise, Blur, Pixelation, and JPEG Compression. It is clear that the proposed method is capable of maintaining high performance against the distortion of JPEG compression.
| Method | Clean AUC | JPEG Compression | Mean AUC | Drop |
| ------------ | ------------ | ------------ | ------------ | ------------ |
| Face X-ray | 98.37 | 81.03 | 82.24 | -16.40% |
| CNND | 99.56 | 98.34 | 86.90 | -12.72% |
| LipForensics | 99.90 | 94.64 | 91.30 | -8.61% |
| Ours | 98.49 | 91.91 | 90.08 | **-8.53%** |
The Drop is calculated as follows,
$$ \mathrm{Drop} = \frac{\mathrm{mAUC}_{\mathrm{perturb} }-\mathrm{AUC} _{\mathrm{clean} }}{\mathrm{AUC} _{\mathrm{clean} }} \times 100\% $$
where
$\mathrm{mAUC}_{\mathrm{perturb} }$ is the average of performance on all the perturbations, and $\mathrm{AUC} _{\mathrm{clean} }$ is the clean AUC.
**Q4. The authors stated that the proposed method can only be applied to scenarios where the forged faces are generated with blending operations. What if the blended image is further processed with some other harmonization techniques (e.g., illumination correction on the faces or on the edges between the genuine and fake regions)?**
**A4:** The reviewer may have misunderstood the blending operation in our face forgery pipeline, which we would like to clarify. After the blending operation, the blended image always undergoes a harmonization process (except for images with inconsistent illumination, for which we do not apply harmonization). Therefore, the proposed model can handle scenarios where the blended images are further processed with harmonization techniques. We refer the reviewer to Section 2 and Table 1 in the Appendix for more details.
As pointed out by the reviewer, our model is mainly trained on forged faces with blending operations and may not perform well when directly applied to fully synthesized faces. This problem can be addressed by adding some fully synthesized faces during training.
**Q5. The adopted FF++ dataset contains video data for training, validation, and testing. As a well-known fact, one critical information of the faked video is temporal information. However, it seems that the authors neglect the temporal information directly.**
**A5**: In this paper, we focus on image-based DeepFake detection rather than video-based. Therefore, following many recent methods [11, 19, 66, 81], we do not consider temporal information for a fair comparison.
---
Rebuttal Comment 1.1:
Comment: Most of my concerns were well addressed, and I would like to upgrade the evaluation to borderline accept. | Rebuttal 1:
Rebuttal: ### **A general response regarding the contributions of our work**
We thank all reviewers for the detailed and constructive comments. We are glad to find that most reviewers generally acknowledge the following contributions of our work.
This paper explores multitask learning of face forgery detection from a joint embedding perspective, aiming to improve generalizability and explainability.
As highlighted by ***Reviewer VfHM***, it is valuable to design a detector that can provide explanations about the manipulations;
As highlighted by ***Reviewer NnkA*** and ***fx6c***, this paper is a pioneering effort in employing language prompts or multimodal data to address the challenge of face forgery detection, shedding light on multimodal approaches for this task;
As highlighted by ***Reviewer jNdm***, this paper is easy to follow and will be of interest to researchers from the community of multitask learning in DeepFake.
We would like to emphasize that our approach is not a simple combination of existing techniques but is built on solid motivations and justifications.
$\underline{\text{First}}$, unlike most existing methods, we prefer to tackle face forgery detection at the semantic level rather than at the signal level. To achieve this, we have defined a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels. This naturally leads to a multitask learning formulation of face forgery detection.
$\underline{\text{Second}}$, the prevailing multitask learning paradigm for face forgery detection takes a discriminative approach, i.e., predicting multiple target outputs (one for each task) directly from the input face image. Such a paradigm suffers from two main drawbacks.
**1)** It overlooks semantic relationships across tasks, which weakens knowledge transfer. For example, irrelevant information (e.g., every detail of the face image in face reconstruction [8]) may be transferred across tasks.
**2)** It requires extensive human expertise to determine task-agnostic/task-specific model parameters and the weights of different task losses as two forms of hyperparameters.
As a significant departure, we propose to formulate multitask learning using a novel joint embedding paradigm. This paradigm is capable of directly transferring the recent advances in multimodal learning (in particular, text + image), which 1) supports encoding the semantic closeness between tasks in the latent feature space, 2) enables automated multitask learning in terms of allocating model capacity (i.e., specifying task-agnostic and task-specific model parameters) and 3) provides textual explanations. All these have not been accomplished by previous methods. In addition, this paradigm takes initial steps and sheds light on face forgery detection using multimodal information. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a joint embedding approach for multitask learning in face forgery detection. The method defines a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels, and describes the ground truth for each task via a textual template. CLIP is used to implement the joint embedding architecture, and multi-level fidelity losses are used for multitask learning. The proposed method outperforms state-of-the-art detectors in terms of generalization ability.
Strengths: 1. This paper proposes a joint-embedding-based multitask learning method for face forgery detection. It could probably be the first work to apply the language prompts on the task of face forgery detection.
2. This paper defines a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels to facilitate the multitask learning.
3. The proposed method achieves better performance than the SOTA schemes in terms of generalization ability.
Weaknesses: 1. The authors apply the existing technologies including CLIP and fidelity loss for joint-embedding-based multitask learning. The technical contribution is rather limited.
2. It lacks an explanation of why the authors use CLIP for joint learning. By the same token, it also lacks an analysis regarding the use of the fidelity loss for multi-task learning.
3. The robustness comparison does not include the SOTA schemes.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Table 1, the AUC of OST on DFDC is lower than that reported in reference [11] (77.73% vs. 83.30%), while the AUC of OST on CDF is exactly the same as that reported in [11]. The authors may want to explain why.
2. The authors may want to explain the purpose of reporting the performance of w/o Aug in Table 2 and 3.
3. In section 1, lines 71-72, “textural templates” should be “textual templates”.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Regarding the limited technical contribution. The authors apply the existing technologies, including CLIP and fidelity loss for joint-embedding-based multitask learning.**
**A1**: Please refer to the general response for technical contributions. In short, the most significant contribution is defining a set of coarse-to-fine face forgery detection tasks based on face attributes at different semantic levels. This naturally leads to a multi-task learning setting, which is implemented by a joint embedding approach with several desirable properties regarding semantic encoding, automation, and explainability. The CLIP and the fidelity loss are our instantiations and can be changed to other plausible choices.
**Q2. It lacks an explanation of why the authors use CLIP for joint learning. By the same token, it also lacks an analysis regarding the use of fidelity loss for multi-task learning.**
**A2**: As CLIP is a simple yet prevalent vision-language model, we use CLIP to compute text/image embeddings. A significant advantage of encoding ground-truth labels via textual prompts is that it gives us a great opportunity to leverage the semantic dependencies among tasks in the representation space. It is also possible to explore embeddings from other vision-language models like UniCL, LiT, GroupViT, HiCLIP, etc.
For the fidelity loss, it has the following advantages over the cross-entropy loss. 1) It attains the true minimum for each desired probability: unlike the cross-entropy loss, the fidelity loss is exactly zero for a correctly predicted pair, which makes the trained model more accurate. 2) It is bounded between 0 and 1. If the loss has no appropriate upper bound, hard samples continuously placed in the wrong position could lead to excessive loss, which can bias the model and degrade its performance. We also experimentally demonstrate its superiority in our ablation studies.
**Q3. The works for comparing the robustness do not include the SOTA schemes.**
**A3**: The primary goal of this paper is to improve model generalizability rather than robustness. Thus, our experimental setups are mainly designed for a fair comparison and testing of model generalizability. In the robustness testing, we include several SOTA schemes of Face X-ray, CNND, and Lip-forensics because the former two also use data augmentation during training, and the latter relies on high-level semantic features with intrinsic robustness to low-level manipulations. As suggested by the reviewer, we will incorporate other SOTA schemes for the robustness comparison.
**Q4. In Table 1, the AUC of OST on DFDC is lower than that is reported in reference [11] (77.73% vs 83.30%), while the AUC of OST on CDF is exactly the same as that is reported in [11]. The authors may want to explain why.**
**A4**: There are two tables in OST; one (Table 1 in OST) is for generalizability comparison, and the other (Table 2 in OST) is for comparison with models based on meta-learning.
The AUC of OST on DFDC in this paper is the average result (the calculation is in line with that in the OST paper) of what is reported in Table 1 of the OST paper, while the AUC of OST on CDF is directly copied from the result in Table 2 of OST
because there are no other results reported in the original OST paper. As suggested by the reviewer, we report the AUC of OST on DFDC based on Table 2 of the OST paper, as follows. As the table shows, the proposed method still outperforms OST on five face forgery datasets by a clear margin.
| Method | FF++ | CDF | FSh | DF-1.0 | DFDC | Mean AUC |
| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| OST | 98.20 | 74.80 | -- | 93.08 | **83.30** | 87.34/83.73 |
| Ours | **98.49** | **89.02** | 98.68 | **93.38** | 82.06 | **92.33**/**90.79** |
**Q5. The authors may want to explain the purpose of reporting the performance of w/o Aug in Table 2 and 3**.
**A5:** Existing face forgery detection methods tend to use very different data augmentation strategies for performance boosting. We report the performance of the proposed model w/o data augmentation in Tables 2 and 3 with the goal of singling out the core contribution of our approach: multitask learning of face forgery detection via joint embedding.
**Q6. A typo: In section 1, lines 71-72, “textural templates” should be “textual templates”.**
**A6**: Thanks for pointing out this typo; we will revise it and proofread the whole manuscript.
---
Rebuttal Comment 1.1:
Comment: I maintain my initial rating after reading the response and other reviewers' comments. | null | null | null | null | null | null |
A Bayesian Approach To Analysing Training Data Attribution In Deep Learning | Accept (poster) | Summary: The paper studies the challenges in measuring the performance of training data attribution (TDA) methods arising from stochasticity in training large deep neural networks. Specifically, the authors use various existing approaches to obtain many samples from the posteriors of the model weights instead of a point estimate with and without a training instance whose influence needs to be computed. The authors then compute the mean and variance of ground truth attribution/influence scores; and measure the performance of various attribution methods by looking at the correlation of these mean and variance with the ground truth scores.
Strengths: - Thorough experiments comparing various Leave-One-Out (LOO) or attribution approximation methods, including influence scores and variants (GD, GC), and the additional training step (ATS).
- The paper provides further empirical evidence to previous observations [1] that common approximations used for LOO actually estimate slightly different objects and are susceptible to randomness in training arising from initialization of model weights, batch order, etc.
Weaknesses: - It is not clear if the phrase "Bayesian perspective on TDA" is useful; the authors could say more on the connection between using Bayesian DL methods, the Student t-test for measuring the noise in TDA estimates, and the Bayesian perspective on TDA.
- In eq 8, how are the T samples from the two posteriors (with and without a training instance) paired.
- It is also not clear if the observations made here apply only to measuring the attribution performance with respect to the ground-truth attributions. That is, the main recommendation to focus on low signal-to-noise pairs seems to apply only to evaluation. Did the authors perform such an evaluation? How does this recommendation affect the use of influence scores in downstream applications such as correcting label mistakes, removing biased instances, etc.?
[1] Bae, Juhan, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B. Grosse. "If Influence Functions are the Answer, Then What is the Question?." Advances in Neural Information Processing Systems 35 (2022): 17953-17967.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We hope to clarify concerns in the following.
### It is not clear if the phrase Bayesian perspective on TDA is useful, authors could say more on the connection between using Bayesian DL methods, Student t-test for measuring the noise in TDA estimates and Bayesian perspective on TDA.
TDA estimates are usually treated as deterministic point estimates, even though model training is in fact probabilistic (SGD and model initialisation are stochastic). Bayesian ML is a framework for modelling the uncertainty in model training (cf. Section 2.2 of the submission and [a]). In our work, we borrow methods from Bayesian ML [b,c] to turn TDA estimates into random variables, which are in turn studied through the proposed statistical testing. By treating TDA estimates as draws from a probability distribution, we can examine their statistical significance and quantify the variance that we found to be inherent in the TDA task.
### In eq 8, how are the T samples from the two posteriors (with and without a training instance) paired.
We don't pair the samples for the two posteriors, as you can see in Eq (9):
$$
\text{Var} [\tau (z_j, z)] = \frac{1}{T^2}\sum_{t,t^\prime} \left(\mathcal{L}(z ;\theta^{(t)}_{\setminus j}) - \mathcal{L}(z ;\theta^{(t^\prime)})-\mathbb{E} [\tau (z_j, z)]\right)^2
$$
The reason we use a common index $t$ for the two distributions in Eq (8) is that, by linearity of expectation, both forms yield identical output:
$$
\mathbb{E} [\tau (z_j, z)] = \frac{1}{T}\sum_t \mathcal{L}(z ;\theta^{(t)}_{\setminus j}) - \frac{1}{T}\sum_{t^\prime} \mathcal{L}(z;\theta^{(t^\prime)}) = \frac{1}{T}\sum_t \left( \mathcal{L}(z ;\theta^{(t)}_{\setminus j}) - \mathcal{L}(z ;\theta^{(t)}) \right)
$$
We hope this brought some clarity and are happy to discuss any follow-up questions further.
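Concretely, the two estimators can be computed from the Monte-Carlo loss samples as follows (a minimal sketch of Eqs. (8)-(9), not our actual implementation):

```python
from itertools import product
from typing import Sequence

def tda_mean_and_var(loss_wo_j: Sequence[float], loss_full: Sequence[float]):
    """Monte-Carlo estimates of E[tau] and Var[tau] (Eqs. 8-9).

    loss_wo_j[t]  : test loss L(z; theta_{-j}^{(t)}) under the t-th posterior
                    sample trained without example j
    loss_full[t'] : test loss L(z; theta^{(t')}) under the t'-th posterior
                    sample trained on the full dataset
    """
    T = len(loss_wo_j)
    assert len(loss_full) == T
    # Eq. (8): difference of the two Monte-Carlo means.
    mean = sum(loss_wo_j) / T - sum(loss_full) / T
    # Eq. (9): average over all T^2 unpaired combinations (t, t').
    var = sum((a - b - mean) ** 2
              for a, b in product(loss_wo_j, loss_full)) / T ** 2
    return mean, var
```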
### It is also not clear if the observations made here only apply to measuring the attribution performance wrt to the ground truth attributions. That is, the main recommendation to focus on low signal to noise pairs seemly to apply only to evaluation. Did the authors perform such an evaluation?
No, we did not make an analysis of the set of low-noise pairs in the submission but it is an interesting point. The Spearman rank correlation matrices of the means, standard deviations and p-values for the set of low-noise train-test pairs (LOO p-values <0.05) are provided in the global response PDF for two experiments (MNIST3 $|\mathcal{D}|=150$ and CIFAR10 $|\mathcal{D}|=500$ with CNN).
We can see from this analysis:
- The approximate TDA methods do not correlate strongly with LOO.
- In this subset, p-values correlate positively among the approximate methods, underlining the observations from Section 4.4, (Dis)Agreement of TDA methods: TDA estimation methods discover similar attributions, including the stochastic noise.
- In this subset, the means of the TDA methods correlate more strongly with LOO than over all train-test pairs (e.g. for MNIST3: ATS: 0.01 -> 0.09, IF: 0.05 -> 0.10, GD: -0.05 -> -0.11, GC: 0.07 -> 0.09; Fig 5, lower left matrix -> Rebuttal PDF file, upper left matrix).
This evaluation shows that none of the tested TDA methods approximates the ranking of the true change in loss, even when this change is stable with respect to training process stochasticity. In the case of IF, a possible reason lies in observations made by [b]: IFs do not correspond to pure LOO retraining; approximation gaps and solver errors lead to a different objective. A study of how the TDA estimation methods correspond to [b]'s objective would be interesting future work.
### How does this recommendation affect the use of influence scores downstream applications such as correcting label mistakes, removing biased instances, etc.
The question of how variance in TDA estimates may affect downstream tasks is intriguing. We ran an additional experiment for the downstream task of mislabel identification similar to [e] and [f] with MNIST3. We find that the inherent stochasticity of TDA leads to a large range in mislabel identification performance. High variance in the TDA estimates degrades downstream task performance. Further details in the global response, experiment 2.
[a] Bayesian methods in global optimization (1991). \
[b] If Influence Functions are the Answer, Then What is the Question? (NeurIPS 2022).\
[c] Simple and scalable predictive uncertainty estimation using deep ensembles (NeurIPS 2017).\
[d] Averaging weights leads to wider optima and better generalization (UAI 2018).\
[e] Understanding black-box predictions via influence functions (ICML 2017).\
[f] Estimating training data influence by tracing gradient descent (NeurIPS 2020).
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I would like to thank the authors for additional experiments and responding to my questions. | Summary: This paper aims to examine training data attribution (TDA) methods from a Bayesian perspective, assuming the learned model parameters are samples from a posterior distribution. The paper illustrates how this perspective might affect TDA methods. It conducts experiments comparing and contrasting different TDA methods when run several times on model parameter samples from an approximate posterior. Experiments consider subsampled versions of MNIST and CIFAR10, and examine two model classes CNN and ViT+LoRA.
Strengths: The topic is very interesting.
As a researcher and practitioner that has worked extensively with TDA methods, I have found myself thinking about (and navigating) the relationship between randomness in model parameters and TDA methods.
I think the community stands to benefit from this line of investigation.
The paper communicates many of its ideas quite clearly. Figures 1 and 2 are especially helpful.
At a high level, the experimental setup is a sensible approach.
Anonymized source code is included.
Weaknesses: The paper is lacking in its theoretical treatment of the subject it aims to explore (effectively limited to Figure 1). This reduces its value to the community. Conclusions are drawn based on experiments conducted via (relatively small) samples from a few simple approximations of the posterior. The paper would benefit considerably from some mathematical analysis and discussion of how a Bayesian model or perspective might affect the validity of TDA methods, such as influence functions. For example, how does treating $\theta$ as a random variable affect the application of the implicit function theorem used to derive influence? The Hessian of the loss is a function of $\theta$, for some set of values in the support of $\theta$ this Hessian is not invertible. Does this set have zero measure? Can it be safely ignored? It seems that there are numerous details that go unexamined.
Overall the writing is quite clear. But crucially, I found the presentation of the test statistic (Equation 11) to be confusing. As such it was difficult to interpret (and thus to review) the subsequent experimental results. (See questions below. I continue the review assuming that the p-values quantify the probability that the samples of $\tau$ could be generated by a random variable having mean equal to 0.)
At a high level, the experimental setup makes sense to me. However, some of the analytical steps in processing the collected data seem weak. For instance, on line 160 it is stated that “[this] treatment [...] poses a novel challenge for evaluation”. Yet if I understood correctly, the experiments yield samples from random variables that are compared in pairs, $\tau$ and $\tau’$. There are numerous established ways to compare two sample-sets, and to quantify the probability that they come from the same underlying distribution. A two sample t-test may be an option.
Without much theoretical analysis to rest on, the experiments on two small datasets and two models struggle to convince me of the general claims. For example, the experiments on training set size (Section 4.3) involve only three size settings and don't show clear trends, while the discussion on model complexity considers only two models, which differ in both size and architecture.
Influence functions have been successfully used to reshape model behavior through the removal of identified sets of training instances. Several of the works cited in this paper achieve this. Therefore, the IF approximation must correlate with LOO at least on average for some types of training instances. How can this be reconciled with the results (e.g. in Figure 5)? This is not really discussed. Can they be explained via low noise pairs?
Minor:
Line 81: While Koh & Liang showed that IFs could be applied to neural networks, they did not derive the method. I suggest citing the original robust statistics papers on IF and/or the infinitesimal jackknife.
Figures 3 and 4 would benefit from larger labels.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Regarding the presentation of the statistical test:
In Equation 7 $\tau$ is redefined as a random variable. The randomness stems from $\theta$ conditioned on the observed dataset and the removal of j. (It might be helpful to indicate that with the notation.)
In Equation 10, it states that the hypotheses being tested are whether \tau does or does not equal 0. This is unclear. I assume these hypotheses refer to $\tau$’s mean, and that the t-test is intended to determine the probability that the samples of $\tau$ could be generated by a random variable having mean equal to 0. Is this correct?
In Equation 11, what version of the T-test are you using? It would be helpful to cite your statistical method. Is the test designed to consider the pairing of the posterior (and perturbed posterior) samples?
In Equation 11, I assume that t should be indexed by j (and is also a function of the test sample z), but it’s unclear whether $t$ is computed for every pair of posterior (and perturbed posterior) samples. The left-most term of the numerator suggests this. If so, how are you aggregating these sets of values to get a single value per train-test pair?
Other:
Is it common to combine these approximate posterior sampling techniques? What are the implications of doing so?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The experiments in this paper consider two models. The paper notes limitations in the size of the datasets considered, explaining that it opted to spend computation time/budget on an exhaustive analysis of TDA values instead. However, if the objective was to comment on issues with TDA in practice, this choice (and limitation) is considerable. The results would be more convincing if the experiments had focused on only a subset of the test-train pairs, but considered larger (more realistic) dataset sizes, and a broader set of model architectures and/or sizes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and constructive suggestions.
### The paper [...] would benefit [...] from some mathematical analysis [...] of how a Bayesian [...] perspective might affect the validity of TDA.
Our probabilistic conversion of TDA methods does not affect theoretical soundness. We apply the original TDA algorithms on the Monte-Carlo samples of model posteriors without modification. Since the methods are identical, it is unclear which additional theoretical treatment would add value. We are happy to discuss should there be further requests.
### Conclusions are drawn based on experiments [...] from a few simple approximations of the posterior.
The simplicity of the posteriors is the driving force behind the successful application of Bayesian ML to complex deep learning models [a,b,c,d]. While simple posteriors like Deep Ensemble (DE) [11] and Stochastic Weight Averaging (SWA) [12] do not recover the posterior perfectly, they are sufficient to support our main message: TDA is inherently stochastic. We also highlight that we use T=50 posterior samples. This is not a small scale in either the Bayesian ML or the TDA community: DE [11] and SWA [12] used T=15 and 30, respectively, while previous work on TDA has used only a few deterministic models (T=1 [e], 3 [f]).
### How does treating θ as a random variable affect the [...] implicit function theorem [(IFT)]?
Even in the deterministic setup, the IFT and inverse Hessians are generally inapplicable due to the non-convexity and high dimensionality of the deep learning optimisation problem. They are not novel challenges introduced by our probabilistic treatment. This is why Koh & Liang have proposed in [3] Section 4.2 (Non-convexity and non-convergence) to add a damping term λ to ensure the positive definiteness (PD) of Hessians. We used λ=3e-5 (as in [f]) and this guarantees PD Hessians at all posterior samples.
### Analytical steps [...] seem weak. [...] There are [many] established ways to compare two sample-sets
Yes, there are many established ways to compare two random variables. However, we address a more complex problem: we need the rank correlation between two sets of random variables, $(\tau_1, \ldots, \tau_N)$ vs $(\tau_1', \ldots, \tau_N')$, where $\tau_i$ refers to the ground-truth TDA value of the $i^\text{th}$ train-test sample pair and $\tau_i'$ refers to the estimated TDA value.
We are not aware of an existing tool for this problem. We have proposed to measure Spearman's ρ between the means and variances separately (Sec. 3), with the intuition that $\tau'$ must also faithfully rank the train-test pairs according to the level of noise.
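For reference, Spearman's ρ is simply the Pearson correlation of the ranks; a minimal pure-Python sketch (illustrative only, not the evaluation code used in the paper) that could be applied to, e.g., the per-pair means or variances:

```python
def _ranks(xs):
    """Average 1-based ranks, with ties receiving the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values starting at position i.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```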
### The experiments on two small datasets and two models struggle to convince me of the general claims.
We agree that testing on larger datasets makes our claim stronger. We were limited by computational resources (e.g. LOO for CIFAR10 of size 500 requires ~104 GPU hours). Nonetheless, the experiments support our general claim: TDA is stochastic and evaluation protocols must reflect this. Besides, TDA tends to be more stable with smaller models. We expect that noise will dominate the signal in larger settings, making the analysis less meaningful.
### [Training set size experiments] involve only three size settings, and don’t show clear trends
We ran additional experiments further varying the training set size. The results show that the variance in TDA estimates remains high beyond a certain training set size. See Exp. 3 in the global response.
### [...] discussion on model complexity only considers two models which differ in both size and architecture.
To address the reviewer’s concern that our models are too different and few, we ran an experiment with a 3-layer CNN that matches ViT+LoRA in #trainable parameters. The results confirm that increasing model complexity decreases the statistical significance of TDA estimates. p-values for LOO with MNIST3: 0.331 (2-layer CNN) → 0.370 (3-layer CNN) → 0.786 (ViT+LoRA). Exp. 4 in global response.
### How can [successful work with IFs] be reconciled with the results?
We share the reviewer’s intuition that successful related work (e.g. [3,15]) could be explained via low-noise pairs. Due to the prohibitive cost of LOO, previous work relied on a small number of train-test pairs (100 [3], 500 [15]). We internally attempted to replicate successful TDA results [3,15] but found that the impact of training stochasticity generally dominates the impact of a single training sample. This observation motivated our work.
### Eq. 10[...] I assume these hypotheses refer to τ’s mean [...]. Is this correct?
The reviewer is correct. The hypothesis should read μ=0 where μ is the mean of the distribution τ(zj,z). We will correct this, thanks!
### Eq. 11 what version of the T-test are you using?[...] Is the test designed to consider the pairing of the posterior (and perturbed posterior) samples?
We use the (unpaired) Student’s t-test [a] to test the statistical significance of μ>0, where μ is the mean of TDA estimates. In Eq. (9), we do not pair the indices $t$ and $t^\prime$ for the two posteriors.
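For concreteness, a minimal sketch of such a one-sample test on the TDA samples (illustrative only; for simplicity it uses a normal approximation for the p-value, which is adequate at T = 50, rather than the exact t distribution with T-1 degrees of freedom):

```python
import math
from statistics import NormalDist

def one_sample_t(samples):
    """One-sample Student's t statistic for H0: mean = 0.

    Returns (t, p_two_sided). The p-value uses a normal approximation,
    reasonable for large T; for exact values use the t distribution
    with T - 1 degrees of freedom.
    """
    T = len(samples)
    mean = sum(samples) / T
    var = sum((x - mean) ** 2 for x in samples) / (T - 1)  # unbiased variance
    t = mean / math.sqrt(var / T)
    p = 2.0 * (1.0 - NormalDist().cdf(abs(t)))
    return t, p
```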
### Is it common to combine these posterior sampling techniques?
Yes, e.g. [b] uses both DE and SWA to parametrise the posterior. This implies a mixture-of-Gaussian posterior family, where the DE models the centroids and SWA models each Gaussian component.
### The results would be more convincing if the experiments had focused on only a subset of the test-train pairs, but considered larger [...] dataset sizes.
We ran an experiment with a CNN on MNIST to study TDA reliability on a set of 1000 train-test pairs. The results are in line with our submission: TDA is inherently stochastic and TDA methods fail to capture the variance in LOO. Exp.1 in global response.
### Other
We will add [c], increase label sizes in Fig. 3 and 4, and include the dataset condition in Eq. 7.\
[a] Student (1908)\
[b] Wilson&Izmailov (NeurIPS 2020)\
[c] Hampel (1974)
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors taking the time to write a detailed rebuttal and to provide additional experiments. However, many of my concerns remain unaddressed, e.g.,
"it is unclear which additional theoretical treatment would add value": I struggle with this response. As a researcher and practitioner who regularly uses these methods, I am saying that the paper hasn’t provided me with much insight beyond what I have already observed empirically or read in related works. I therefore question the extent of its value to the community. I agree that in practice TDA methods show a great deal of stochasticity. At least when applied to non-convex models optimized via SGD. Perhaps, what I’m struggling most with here is the framing. If the paper was simply positioned as saying: “User be warned, most TDA values in neural networks are dominated by retraining noise”, then I would be less critical. (Albeit variations of this point have been made before). But the paper is positioned as a “Bayesian perspective” on TDA, and as such I was expecting the subject to be investigated more theoretically, and I was hoping it might present some insights into the observed stochastic behaviors of these methods, e.g., provide a theoretical explanation for the training data points which are consistently high influence for a test point (what you call low-noise pairs). Moreover, what if the reader were interested in a simpler model class, like linear regression or logistic regression? These are important use cases for TDA methods. But influence functions are not “inherently stochastic” in linear regression (nor in logistic regression–assuming sufficient optimization). The practitioner would need to consider the application of the TDA methods to the Bayesian version of these methods, which aim to explicitly model uncertainty in the parameters. This is left unaddressed, e.g., how would one apply IF to simple Bayesian models (ones where parameter uncertainty is explicitly represented)?
“Even in the deterministic setup, the IFT and inverse Hessians are generally inapplicable due to the non-convexity and high dimensionality of the deep learning optimization problem. They are not novel challenges introduced by our probabilistic treatment.” I agree that some TDAs already suffer theoretical issues when applied to neural networks. However, this is not a reason to avoid explicitly naming them and discussing their implications. In practice, these shortcoming are often circumnavigated by either continuing to train from an optimized checkpoint, or by keeping all but the last layer parameters frozen (e.g., Koh and Liang), effectively leading to a logistic regression model over a fixed feature extractor. Neither of these approaches are examined or discussed.
“We add a damping term λ to ensure the positive definiteness (PD) of Hessian”: This damping term is an example of something that may be interesting to examine theoretically, could it be connected to the prior in a Bayesian learning framework?
“we ran an experiment with a 3-layer CNN that matches ViT+LoRA in #trainable parameters. The results confirm that increasing model complexity decreases the statistical significance of TDA estimates ” : There are still only 3 observations here. When the dataset size experiments were increased from 3 observations to 6 observations the trend changed. What makes the authors believe that this claim about model complexity is more robust?
I still struggle with the setup of the main statistical test. As far as I can tell, Equation 11 does not correspond to an unpaired, 1 sample student’s t-test. Assuming the p-values were nonetheless correctly calculated, it’s still unclear whether this test is appropriate given the mixed nature of the posterior samples. The samples are not IID. The “seed” samples from DE are arguably IID, but the subsequent samples from SWA are not. This should at least be discussed.
As such I stand by my original score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up comments and suggestions that will improve the quality of our paper.
> I am saying that the paper hasn’t provided me with much insight beyond what I have already observed empirically or read in related works [...] Perhaps, what I’m struggling most with here is the framing. [T]he paper is positioned as a “Bayesian perspective” on TDA, and as such I was expecting the subject to be investigated more theoretically.
Thank you for the clarification and we are glad to confirm that the reviewer shares a similar experience regarding the stochasticity of TDA. We fully agree that our framing is broader than our main focus and this is potentially a critical issue. We propose to address this issue by changing the title to delineate the scope more precisely and convey the message more concretely: “A Bayesian Approach to Analysing Training Data Attribution in Deep Learning”. We will also update the abstract and introduction to make clear that our scope is the application of TDA on deep learning models.
We would like to clarify further that we do not aim for a theoretical contribution. We study how the stochasticity in the training process of deep models fluctuates the TDA estimates. We apply prominent Bayesian deep learning approaches like Deep Ensemble and Stochastic Weight Averaging to model the randomness in the deep learning training procedure. We find that training process stochasticity has a considerable effect on TDA, as we observe significant variances in the scores. We further find that the tested approximation methods fail to fully capture the ground-truth variance. Based on this observation, we recommend evaluating the approximation methods for not only replicating the mean leave-one-out (LOO) measure but also the variance therein. We do such an evaluation for not only influence function (IF) but also other approximate TDA techniques like additional training step (ATS), grad-dot (GD) or TracIn, and grad-cos (GC).
> I was hoping it might present some insights into [...] e.g., provide a theoretical explanation for the training data points which are consistently high influence for a test point [and] how would one apply IF to simple Bayesian models [...]. Moreover, what if the reader were interested in a simpler model class, like linear regression or logistic regression?
We agree that probing the link between sample traits and TDA noise is important. However, even for simple model classes, characterizing TDA value distribution is inherently complex.
For example, even if we assume a simple Gaussian posterior for the learned parameter $p(\theta| X)=\mathcal{N}(\mu,\Sigma)$, the resulting IF value for logistic regression corresponds to
$$ IF(z_\text{test},z) = -y_\text{test} y \cdot \sigma(-y_\text{test}\theta^\top x_\text{test})\cdot\sigma(-y\theta^\top x)\cdot x_\text{test}^\top H_\sigma^{-1} x$$ (Koh & Liang 2017).
where $\sigma(\cdot)$ is the sigmoid function. Note that the product of the sigmoids of Gaussian random variables does not yield a tractable distribution that can be represented with closed-form mean and variance formulae.
Likewise, linear regression with an $\ell^2$-loss results in the following formula for IF:
$$ IF(z_\text{test},z) = - (x_\text{test}^T H^{-1} x) (x_\text{test} \cdot \theta - y_\text{test}) (x\cdot \theta - y) $$
Here, again, the product of two Gaussian random variables does not yield a distribution with tractable mean and variance, making it rather non-trivial to analyse the theoretical behaviour.
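The first-order behaviour can nevertheless be checked numerically in the simplest setting. The following self-contained sketch (illustrative only, not from the paper or rebuttal) compares exact LOO retraining with the closed-form influence value for 1-D least-squares regression without an intercept; with $H$ the Hessian of the summed loss, the LOO change is approximated by $-IF$:

```python
def loo_vs_influence(xs, ys, x_test, y_test):
    """Exact LOO loss change vs. closed-form influence for 1-D least-squares
    regression (no intercept), per-sample loss 0.5 * (x * theta - y)**2.

    Returns a list of (tau_j, -IF_j) pairs; tau_j is approximated by -IF_j.
    """
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    theta = sxy / sxx                              # full-data minimiser

    def test_loss(th):
        return 0.5 * (x_test * th - y_test) ** 2

    pairs = []
    for j in range(len(xs)):
        # Exact leave-one-out retraining (closed form for least squares).
        th_wo = (sxy - xs[j] * ys[j]) / (sxx - xs[j] ** 2)
        tau = test_loss(th_wo) - test_loss(theta)
        # Influence value in the form above, with H = sum_i x_i^2:
        # IF = -(x_t H^-1 x_j) * (x_t*theta - y_test) * (x_j*theta - y_j)
        IF = -(x_test * xs[j] / sxx) \
            * (x_test * theta - y_test) * (xs[j] * theta - ys[j])
        pairs.append((tau, -IF))
    return pairs
```

In this deterministic convex setting the signs of the exact LOO changes and of $-IF$ agree; the variance discussed above only enters once the training procedure itself is stochastic.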
> influence functions are not “inherently stochastic” in linear regression (nor in logistic regression–assuming sufficient optimization)
We agree that we need to be careful with this statement and that influence functions are not “inherently stochastic”. We apologise. In the paper, we will explicitly restrict our scope to deep learning models.
> However, this is not a reason to avoid explicitly naming them and discussing their implications.
Yes, agreed. We will discuss the mentioned issues with influence functions in the manuscript in §3, under paragraph *TDA methods likewise estimate random quantities*.
Title: Response to reviewer (1/2)
---
Reply to Comment 1.1.2:
Title: Response to reviewer (2/2)
Comment: > In practice, these shortcoming are often circumnavigated by either continuing to train from an optimized checkpoint, or by keeping all but the last layer parameters frozen (e.g., Koh and Liang), effectively leading to a logistic regression model over a fixed feature extractor. Neither of these approaches are examined or discussed.
The mentioned approaches (i.e. continuing to train from optimized parameter, freezing the model up to the last layer) reduce the stochasticity in the parameter. However, as the reviewer would be aware from _Bae et al._’s work (_If Influence Functions are the Answer, Then What is the Question?_), these approaches lead to a gap between influence scores and what they set out to measure, which is the leave-one-out retraining (LOO). In our study, we aim to explore the variance in TDA scores based on the original LOO definition, and observe how various TDA approximation methods capture this variability.
> “We add a damping term λ to ensure the positive definiteness (PD) of Hessian”: This damping term is an example of something that may be interesting to examine theoretically, could it be connected to the prior in a Bayesian learning framework?:
The damping term could indeed be seen as an isotropic Gaussian prior centred at the origin. We thank the reviewer for pointing this out and will add this comment to the paper.
> When the dataset size experiments were increased from 3 observations to 6 observations the trend changed.
The relation between training set size and the variability of TDA scores is indeed not linear, and we will update §4.3 *Training set size* for the final version of the paper. We would like to correct this observation by stating that an increased training set size leads to an increased variability in TDA scores *up until a certain point*. We observe that TDA scores tend to be smaller with larger training sets:
| Training set size | Mean TDA score (LOO) | Mean variance of LOO |
| --- | -------------------- | -------------------- |
| 30 | 0.242 | 0.098 |
| 60 | 0.042 | 0.041 |
| 90 | 0.048 | 0.093 |
| 120 | 0.073 | 0.091 |
| 150 | 0.030 | 0.019 |
| 180 | 0.017 | 0.019 |
We believe this makes sense because a single training sample tends to contribute less to test samples overall as the training set size increases. If all train-test pairs consistently exhibit low TDA scores, the variance in scores decreases, resulting in smaller p-values, which we observe as the drop in p-values for larger training sets, such as $|D|\in\{150,180\}$ in additional experiment 3 of the rebuttal pdf.
> What makes the authors believe that this claim about model complexity is more robust?
We believe that our claim about model complexity (more complex models tend to exhibit larger variability in their TDA scores) is robust to changes in the observed trend because we consistently observe that model complexity affects the variance in TDA scores more strongly than the mean score itself:
| Model | Mean TDA score (LOO) | Mean LOO variance |
| ----------- | -------------------- | ----------------- |
| 2-Layer CNN | 0.030 | 0.019 |
| 3-Layer CNN | 0.047 | 0.045 |
| ViT+LoRA | 0.058 | 0.224 |
This observation aligns with the intuition of using a mostly frozen model to compute IFs (Koh & Liang, 2017): With more parameters, we increase the stochasticity of the training procedure, in turn diminishing the reliability of TDA scores.
> As far as I can tell, Equation 11 does not correspond to an unpaired, 1 sample student’s t-test.
We use a 1-sample Student’s t-test for a single random variable, TDA score $\tau$. There is no pairing of samples. Note that $z_j$ and $z$ are fixed variables in Equation 11.
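As a sketch of this computation (illustrative code added here, not from the paper; function names are ours), the statistic treats the $T$ posterior samples of $\tau$ for one fixed pair $(z_j, z)$ as observations of a single random variable:

```python
import math
from statistics import mean, stdev

def one_sample_t(tda_samples, mu0=0.0):
    """One-sample Student's t statistic for H0: E[tau] = mu0, where
    tda_samples are TDA scores tau(z_j, z) of one fixed train-test pair,
    recomputed under T samples from the model posterior."""
    n = len(tda_samples)
    m, s = mean(tda_samples), stdev(tda_samples)  # sample mean / std (ddof=1)
    return (m - mu0) / (s / math.sqrt(n))
```

The p-value then follows from the Student's t distribution with $n-1$ degrees of freedom (e.g., via `scipy.stats.ttest_1samp`, which computes both at once).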
> The samples are not IID. The “seed” samples from DE are arguably IID, but the subsequent samples from SWA are not. This should at least be discussed.
Sure, for the mixture-of-Gaussian posterior, we do not sample IID. Instead, we perform a version of stratified sampling, where we fix the number of samples from each centroid. Within each centroid, the sampling is IID, as the reviewer has pointed out. We believe that the stratified sampling approach used in this study is unlikely to introduce significant bias in our statistical analysis. This is because we have ensured an equal number of samples from each stratum, and the strata (centroids) themselves are exchangeable, meaning that their order does not affect the overall outcome of the analysis. We thank the reviewer for this critical observation. We will include this discussion. | Summary: This paper presents a Bayesian perspective on Training Data Attribution (TDA), a technique that identifies influential training data for model predictions. The authors propose treating the learned model as a Bayesian posterior and TDA estimates as random variables. This approach reveals that the influence of individual training data often gets overshadowed by noise from model initialization and batch composition. Consequently, they suggest using TDA only when certain training data consistently influences model predictions, despite noise factors.
Strengths: - Well written, easy to follow. Thanks!
- Studies an important practical problem and provides practically relevant recommendations.
- Strong analysis of experiments
Weaknesses: - The paper identifies difficulties with current practice and provides some recommendations, but it does not discuss a solution or paths towards a solution of the TDA problem.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: l 65: what do you mean by "more global"?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the encouraging review.
In the following, we answer the questions posed by the reviewer one by one:
### [The paper] does not discuss a solution or path towards a solution of the TDA problem.
Our focus is to identify issues with the prior problem definition of the TDA task and to propose a novel problem definition that makes more sense in practice. In the process, we have shared interesting intuitions that may lead to effective solutions. For example, we have identified varying degrees of variance in different train-test pairs (Figure 3) and have recommended that future researchers study the low-noise train-test pairs, where the TDA estimates are stable (Section 4.5, line 298). Another path towards a more practical TDA technique is to study the factors affecting the inherent variance of TDA.
We will make this discussion more explicit in the final version of the paper.
### Line 65: What do you mean by “more global”?
We categorise a TDA method as local vs global based on the expected counterfactual impact of “altering a training sample $z_j$”. For example, the leave-one-out (LOO) re-training introduces a global impact on the model parameters, as the model is trained for multiple iterations without the training sample $z_j$. On the other hand, taking a single additional training step on $z_j$ is considered local, as the expected impact is more restricted.
We understand that this is not a well-defined terminology. We will edit the sentence as follows:
“(5) Observation that the TDA estimation methods capture local changes in the model with regard to the counterfactual question of “retraining without $z_j$”, while LOO retraining itself results in a more global change through the training procedure.”
We are happy to discuss this point and hope to find a phrasing that will improve the paper for better clarity here.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I appreciate the explanation. | Summary: In this paper the authors investigate training data attribution (TDA) through a bayesian lens by explicitly considering the randomness in estimating the model parameters with and without a given training example. To generate approximate bayesian posteriors on model parameters the authors use deep ensembles and SWA, then consider multiple different methods of TDA. They find that, using a t-test, TDA estimates often exhibit high variance, indicating high signal to noise ratio. This variability is often dominated by noise from model initialization and seems to increase with training set size and model complexity. They also show strong consistency between different groups of TDA methods.
Strengths: The manuscript is well written and clear, and the problem setting important. Measuring the variability in downstream TDA estimates caused by variability in model parameters is an interesting perspective that adds to the growing body of literature aiming to understand various methods of TDA. The conclusions drawn by the authors regarding sources of randomness (initialization vs batching) seem to align with current intuitions around TDA estimates and present a new way to measure the quality of any proposed TDA method by considering those pairs for which the LOO estimates are statistically significant.
Weaknesses: One of the weaknesses is the small dataset sizes considered. The authors restrict datasets to 150 training examples and 900 test examples for MNIST and 500 train/test examples for CIFAR10. At these small dataset sizes, we might expect higher variability and correlation between training and test examples. For example, if the model only sees a single 3 written a particular way during training, then we might expect high variability in differently trained models’ predictions at a similar 3 in the test set. However, seeing many such 3s in a larger dataset should decrease the variability of posterior predictions. Although the conclusions drawn and trends observed at these small scales are interesting, it remains to be seen whether they hold for larger datasets. As a starting point, the authors should consider running the same experiments on a larger training set but consider only a small random subsample in their analysis. This should give a stronger signal for the behavior of the random variable $\tau$ for larger datasets.
The other main set of experiments that would greatly strengthen the paper is the consideration of the downstream tasks for which TDA is useful, for example in mislabel identification. Since these tasks may not depend on exact LOO estimates [1], the inherent variability in TDA estimates may or may not be a relevant consideration. It would be interesting to see the extent to which the variance of $\tau$ estimates makes these tasks difficult or even potentially ill-posed.
[1] Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B Grosse. If influence functions are the answer, then what is the question?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Is there a hypothesis for why so many more MNIST3 examples seem to have low p-values compared to CIFAR10 examples? It would be interesting to see whether these low p-value examples have some special properties that could be exploited to identify them efficiently.
- It seems that the model complexity analysis should be considered relative to the dataset we are attempting to perform TDA on. Is there any way to quantify this value? Although a ViT model may exhibit high $\tau$ variability on a simple dataset like CIFAR10 or MNIST3, it may behave differently on a much more complex dataset like ImageNet.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review and recommendation to accept our work.
We wish to address the remarks and questions raised by the reviewer one by one:
### One of the weaknesses is the small dataset sizes considered.
We understand that considering greater dataset sizes would be desirable. However, the experiments are already computationally heavy when it comes to TDA evaluations that involve re-training the whole network several times, each time leaving out a single training sample. We have retrained the model #train $\times$ #MC samples $=150\times 50=7500$ times for the MNIST3 experiments and $500\times 50=25000$ times for the CIFAR10 experiments. This amounts to roughly 2 and 104 Nvidia 2080ti GPU-hours per single TDA evaluation, respectively. This explains why the TDA community has relied on apparently smaller-scale experiments: [a,b] have considered only 100 and 500 train-test pairs, respectively. In this work, we opt to work with smaller datasets and a thorough consideration of all train-test pairs $(z_j,z)$. We believe that a key problem with previous TDA papers is the selection of certain train-test pairs to report the results on. Thanks to this exhaustiveness, we are able to draw conclusions based on the entire population of train-test pairs: we verified the prevalence of high-noise train-test pairs and the existence of very few low-noise pairs (Section 4.5, line 294).
### It remains to be seen whether they hold for larger datasets. As a starting point, the authors should consider running the same experiments on a larger training set but consider only a small random subsample in their analysis.
We ran an additional experiment following the reviewer’s suggestion: We trained a model on the full MNIST dataset and defined a smaller subset of train-test pairs (100$\times$10=1000) for studying TDA. Note that MNIST is not small for our experimental setting, where we compute LOO retraining. In fact, training the model once on one Nvidia 2080ti GPU takes around 7 min, which we did 100$\times$50 posterior samples = 5000 times, resulting in roughly 583 GPU-hours. We confirm the main findings of the smaller-scale experiments: the ground-truth TDA values tend to show high (p>0.05) variance in general. Experimental details are in the global response, experiment 1.
### The other main set of experiments that would greatly strengthen the paper is the consideration of the downstream tasks for which TDA is useful, for example in mislabel identification.
Following the reviewer’s suggestion, we performed a mislabel identification experiment similar to [a] and [c]. We find that the inherent stochasticity of TDA leads to a large range in mislabel identification performance and that high variance in the TDA estimates degrades downstream task performance. Experimental details in the global response, experiment 2.
### Is there a hypothesis for why so many more MNIST3 examples seem to have low p-values compared to CIFAR10 examples?
Our intuition is: MNIST3 is an easier task to learn than CIFAR10, which could result in a less complex loss landscape with global optima that are easier to find. If model posteriors are sampled from the same optimum, there could be less variation in the TDA scores.
### It would be interesting to see whether these low p-value examples have some special properties that could be exploited to identify them efficiently.
We agree that a study on the properties of low-noise train-test pairs would be interesting, which is also one of our main recommendations to the community. We do not have a strong hypothesis for the cause of this phenomenon yet, as the noise level seems independent of obvious features like classifier confidence. This is an interesting research question to be explored in the future.
### It seems that the model complexity analysis should be considered relative to the dataset we are attempting to perform TDA on. Is there any way to quantify this value? Although a ViT model may exhibit high 𝜏 variability on a simple dataset like CIFAR10 or MNIST3, it may behave differently on a much more complex datasets like ImageNet.
We agree that it makes sense to match the model complexity to the data complexity. We applied LoRA finetuning to reduce the effective model complexity of a ViT down to match the CIFAR10 dataset. Though it would be interesting to perform our analysis on ViTs trained on ImageNet, this is computationally prohibitive - it would require retraining the ViT $10,000\times 50 = 500,000$ times, which roughly corresponds to 2.4 GPU-years. A realistic research scale at the moment is what we have presented.
[a] Understanding black-box predictions via influence functions (ICML 2017)
[b] FastIF (EMNLP 2021)
[c] Estimating training data influence by tracing gradient descent (NeurIPS 2020)
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. After reading the other reviewer responses and thinking more about this work myself I have a few more reservations.
The additional experiments address some of my concerns but it is still difficult to support the claims made since the noise could be stemming from many different sources that are difficult to disentangle in these settings. Although I recognize the difficulty in attaining gold-standard LOO retraining TDA estimates, we know from [2] that LOO does not necessarily correlate with IF (and the other methods as well, given the analysis in Figure 5), so comparing the two is a bit misleading, especially as neural network sizes grow larger and this discrepancy increases. One option is to use PBRF training, as in [1, 2], which we know corresponds well with IF. Applying the same analysis to this training regime would give us a clearer picture of the stochasticity of IF. As it stands, the observed variability is not so surprising when we consider what the existing TDA methods measure as compared to true LOO retraining.
Another consideration is that TDA methods are often used to analyze the predictions of a **specific** model, rather than a general class of models. In these cases we care only about the specific sample from the unknown model posterior and not the posterior itself. The suggested analysis above of the PBRF training regime would more closely align with this problem setting. In light of these concerns I will be adjusting my score down one point.
[1] Studying Large Language Model Generalization with Influence Functions. Roger Grosse ̊:, Juhan Bae ̊:, Cem Anil ̊ et al. 2023.
[2] Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B Grosse. If influence functions are the answer, then what is the question?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the great additional points to the original review.
> [...] it is still difficult to support the claims made since the noise could be stemming from many different sources that are difficult to disentangle in these settings.
We agree that in practice, the source of noise will be diverse and potentially difficult to disentangle. But in our experimental setting, we used random initialisation and batch composition as sources of noise, which could effectively be controlled:
“In a variant of DE with the initialisation as the source of randomness (\textbf{DE-Init}), we train each of $T_\text{DE}$ randomly initialised parameters $\theta^{(t)}\_0$ on either $\mathcal{D}$ or $\mathcal{D}\_{\setminus j}$. (...) We also consider the batch composition in stochastic gradient descent (SGD) as the source of randomness (\textbf{DE-Batch}). (...)” [§4.1, lines 181-186]
Controlling only these two factors is sufficient to support the claim that training process stochasticity (via random initialisation and batch composition) leads to variance in TDA scores [§4.5, lines 287-288].
> [...] we know from [2] that LOO does not necessarily correlate with IF [...], so comparing the two is a bit misleading, especially as neural network sizes grow larger and this discrepancy increases.
We thank the reviewer for initiating an interesting discussion around the seminal work of Bae et al. (2022). We are aware that IF (one of the tested TDA methods in this work) does not exactly correspond to LOO from Bae et al. (2022) and the discrepancy increases with larger networks, as the reviewer has described. Nevertheless, we strongly believe that the ultimate goal of TDA is to predict the counterfactual outcome of removing a training sample, rather than the modified PBRF objective in Bae et al. (2022). We thus argue that it is necessary to make an empirical comparison between the ground-truth LOO and approximate TDA methods like IF, precisely to quantify the aforementioned discrepancy. [§4.5, lines 291-293].
Please let us know if there is any misunderstanding on the reviewer’s comments from our side.
> [...] the observed variability is not so surprising [considering] what the existing TDA methods measure as compared to true LOO retraining.
We are not completely sure if we understand the reviewer’s comment correctly. Our understanding of the reviewer’s comment is: because existing TDA methods are very crude approximations of the true LOO retraining, it is indeed unsurprising that TDA methods show great variability. We believe there could be a bit of confusion. Our main point is that even the true LOO retraining, which is considered the ground-truth target for approximate TDA methods, exhibits stochasticity: “Generally, we observe many TDA measurements, ground-truth and estimations likewise, are unstable with non-significant p-values (> 0.05).” [§4.2, lines 204-205].
Because the ground-truth target is stochastic, our point is that we need to adjust our evaluation protocol to embrace the inherent stochasticity of the task and treat TDA values as random variables [§4.5, lines 287-293]. We measure the Pearson and Spearman correlations of TDA estimate’s mean and variance to study how well approximate TDA methods capture the ground-truth LOO scores in both mean and variance. We’d be happy to discuss further if we have not understood the reviewer’s comment correctly.
> Another consideration is that TDA methods are often used to analyze the predictions of a specific model, rather than a general class of models. In these cases we care only about the specific sample from the unknown model posterior and not the posterior itself.
This is a great point. Indeed, in practice, the starting point would be a fixed trained model. It would also make sense to use a fixed model trained on the original dataset $\theta_\mathcal{D}$, rather than the posterior $\theta\sim p(\theta|\mathcal{D})$. However, to answer the question “how does the model output change when a sample $z_j$ is removed from the training set?”, we should inevitably introduce multiple possibilities for the counterfactual model $\theta_{\mathcal{D}\setminus j}$. There is no well-defined notion of a *unique* trained model obtained from re-training a model without a certain sample because the exclusion distorts the batch composition, for example. Such an ambiguity in $\theta_{\mathcal{D}\setminus j}$ is well-captured via Bayesian model posterior $\theta\sim p(\theta|\mathcal{D}_{\setminus j})$. We extend this posterior viewpoint to the model built from the original dataset $\theta\sim p(\theta|\mathcal{D})$, but one could still consider treating this as a fixed variable. We will discuss this option in the final version.
We are glad to be able to continue the constructive discussion. We will incorporate the discussion here to the final manuscript. In case there is any misunderstanding from our side or the reviewer has any follow-up questions, we will remain available. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments and suggestions. Reviewers agree that we “study an important practical problem” (fmRD, CbTs) and provide “thorough experiments” (wqEs) and a “strong analysis” (fmRD).
We have addressed individual reviewers’ comments and questions in the dedicated responses. We use the global response to show the results of experiments requested by the reviewers.
### 1: Larger dataset (fqXw, CbTs)
Summary: We perform the test for statistical significance of TDA given a model trained on MNIST ($|D|=60,000$). We base the statistical analysis on a small random subsample and find high (p>0.05) variance in ground-truth TDA as well as TDA estimates, in line with previous findings.
Setting:
We train a 2-layer CNN on the full MNIST dataset and sample $T=50$ times from the posterior. For analysis of TDA methods, we uniformly sample 100 $z_j$ from the training set and 10 $z$ from the test set, resulting in 1000 train-test pairs.
Results:
The model has a predictive accuracy on the MNIST test set of $0.979 \pm 0.001$ at 95% CI. The p-values resulting from the TDA analysis are:
| LOO | ATS | IF | GD | GC |
| --- | --- | --- | --- | --- |
| 0.761 | 0.362 | 0.464 | 0.475 | 0.247 |
Distribution of p-values for LOO and correlation analysis is attached in the global response PDF.
The results show that TDA estimates vary strongly for the small subset of train-test pairs (high p-values). Given our previous findings, we find two main reasons: (1) MNIST is larger in training set size than the initial sets we used. The attribution of one sample to model behavior is likely to be marginal and unstable. (2) Low-noise samples exist, but they are in the minority. This experiment shows that LOO is inherently noisy and that TDA approximation methods fail to capture this. This conclusion aligns with the results in the submission.
### 2: Mislabel identification (wqEs, CbTs)
Summary: We intentionally mislabel parts of the training dataset and aim to identify the mislabeled samples using TDA. We find that the inherent stochasticity of TDA leads to a large range in mislabel identification performances.
Setting:
We perform this experiment with the 2-layer CNN and MNIST3 ($|\mathcal{D}|=150$) and CIFAR10 ($|\mathcal{D}|=500$). We follow the procedure from Koh & Liang [a]: First, a random 10% of the datasets are mislabeled by the highest scoring incorrect label. We train the model using these mislabeled datasets (sample $T=50$ times from the posterior). Then, we compute self-influence, which is the attribution of a sample to itself $\tau(z_j, z_j)$ [a], with each TDA method. The mislabeled dataset is ranked according to self-influence and the quantity of interest is the fraction of mislabeled samples found when inspecting the top x% of the ranked dataset.
In the analysis, we inspect (1) the range of mislabeled fractions discovered if we treat TDA deterministically, i.e. we compute the discovered fraction per $t\in T$ and report the range; (2) the fraction of mislabeled samples discovered when we use the mean over the TDA scores of our posterior samples.
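A minimal sketch of this evaluation loop (illustrative code we add here; names are not from the paper):

```python
def fraction_mislabeled_found(self_influence, is_mislabeled, top_frac):
    """Rank training samples by self-influence tau(z_j, z_j), descending,
    and return the fraction of all mislabeled samples recovered when
    inspecting the top `top_frac` of the ranking."""
    order = sorted(range(len(self_influence)),
                   key=lambda j: self_influence[j], reverse=True)
    k = int(top_frac * len(order))
    found = sum(is_mislabeled[j] for j in order[:k])
    return found / sum(is_mislabeled)
```

Running this once per posterior sample $t\in T$ gives the range in (1); running it on the mean TDA scores over posterior samples gives (2).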
Results:
The results plots are visualized in the global response PDF.
We find that deterministic TDA results in a large range of possible outcomes for identifying mislabeled samples. This means that it is harder to reliably identify mislabels when TDA is treated as point estimates.
### 3: Training set size (fqXw)
Summary: Ablations with training set sizes, expanding on the experiment from section 4.3 “Training set size”, show that there may be a point of “stochasticity saturation” where the increasing size of the training set does not contribute more to the variance in TDA estimates.
Setting:
We expand the experiment with the 2-layer CNN on MNIST3 and CIFAR10 from section 4.3 Training set size by testing the model trained on MNIST3 of sizes 90, 120, 180 and CIFAR10 of sizes 300, 400, 600.
Results:
Updated plots of Figure 4 in global response PDF.
The results indicate that an increase in training set size does not necessarily lead to an increase in noise (i.e. the relationship is not linear). Instead, we observe that there may be a point of “stochasticity saturation” where the increasing size of the training set does not contribute more to the variance in TDA estimates (p-values stay large, but don’t increase).
We will update the discussion in paragraph 4.3 “Training set sizes” to reflect this for the final version.
### 4: Model complexity (fqXw)
Summary: An additional experiment with a 3-layer CNN shows that model complexity is likely a factor in the reliability of TDA estimates.
Setting:
We train a 3-Layer CNN with 620,362 trainable parameters on MNIST3 ($|\mathcal{D}|=150$), CIFAR10 ($|\mathcal{D}|=500$) to analyse TDA methods. We chose this model as it is comparable to the 2-layer CNN in terms of architecture and to the ViT+LoRA model in terms of trainable parameters (597,514).
Results:
P-values (Distributions of p-values for ground-truth TDA (LOO) in the global response PDF.)
| p-values | LOO | ATS | IF | GD | GC |
| -------- | --- | --- | --- | --- | --- |
| MNIST3 | 0.370 | 0.368 | 0.464 | 0.470 | 0.005 |
| CIFAR10 | 0.687 | 0.432 | 0.579 | 0.581 | 0.365 |
The results show that model complexity is likely a factor for the stochasticity in TDA estimates, where increasing model complexity (architecture and number of trainable params) means increasing variance, in line with previous findings. We also note that the number of low-noise train-test pairs decreases with increasing model complexity, but they still exist.
[a] Understanding black-box predictions via influence functions (ICML 2017)
Pdf: /pdf/c8dfbd6d41a75e0a6bfd00bc54dd6073d4aa6307.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Aleatoric and Epistemic Discrimination: Fundamental Limits of Fairness Interventions | Accept (spotlight) | Summary: The manuscript introduces FairFront, i.e., an estimate of the upper bound on the Pareto frontier for fairness and accuracy. The authors empirically show the tightness of this bound by showing how SOTA approaches perform close to the FairFront, while the gap may be attributed to distributional variations.
Strengths: - The authors tackle the problem of finding the upper bound on the Pareto frontier in contrast to prior works which focus on the lower bound.
- The approach presented in the manuscript is theoretically grounded (by the essence of using Blackwell's results).
- SOTA approaches are used to show the tightness of the bound empirically.
Weaknesses: - The biggest weakness is that epistemic and aleatoric discrimination are terms introduced in the paper which have a link to the corresponding concepts in the uncertainty literature. However, the way these terms are used contradicts the definitions of the uncertainty counterparts e.g., Aleatoric uncertainty is linked to missing data (line 14-15, 65-66, 250-251). In prior literature, the lack of data falls in the domain of epistemic uncertainty [1][2][3].
- In the manuscript, Aleatoric discrimination is linked to distributional differences. However, prior work either differentiates it from both aleatoric and epistemic uncertainty[4] or considers it as a part of epistemic uncertainty [5 (Section 9.2 dedicated to this topic)].
- It is mentioned that distributional differences are largely a grey area in prior work. However, works such as FairBatch [6] and FairMixup [7], tackle such cases and are ideal for inclusion in the experiments.
- The manuscript is a bit hard to follow. Even though it formally describes which components are used, there is little intuitive background. Some links are hard to follow, e.g., it is not immediately obvious that $(S, Y) - X - \hat{Y}$ refers to the Markov chain in Definition 2. It is not evident why this is used as the Markov chain.
[1] Swiler, Laura P., Thomas L. Paez, and Randall L. Mayes. "Epistemic uncertainty quantification tutorial." _Proceedings of the 27th International Modal Analysis Conference_. 2009.
[2] Shaker, Mohammad Hossein, and Eyke Hüllermeier. "Aleatoric and epistemic uncertainty with random forests." _Advances in Intelligent Data Analysis XVIII: 18th International Symposium on Intelligent Data Analysis, IDA 2020, Konstanz, Germany, April 27–29, 2020, Proceedings 18_. Springer International Publishing, 2020.
[3] https://docs.aws.amazon.com/prescriptive-guidance/latest/ml-quantifying-uncertainty/epistemic-uncertainty.html
[4] Amini, Alexander, et al. "Deep evidential regression." _Advances in Neural Information Processing Systems_ 33 (2020): 14927-14937.
[5] Varshney, Kush R. "Trustworthy Machine Learning." _Chappaqua, NY_ (2021).
[6] Roh, Yuji, et al. "FairBatch: Batch Selection for Model Fairness." _International Conference on Learning Representations_.
[7] Mroueh, Youssef. "Fair Mixup: Fairness via Interpolation." _International Conference on Learning Representations_. 2021.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: It is hard to see intuitively why FairFront works, e.g., why is the Markov chain $(S, Y) - X - \hat{Y}$? How does it filter out valid classifiers?
I acknowledge that I read the response and I feel overall the rebuttal was satisfying.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No.
- From Lemma 1, it appears that one of the limitations of this work is that the FairFront is only applicable to convex classifiers.
- I would highly suggest renaming the terms 'epistemic' and 'aleatoric' discrimination since they do not portray the corresponding uncertainty counterparts accurately and may lead to confusion. Terms such as 'Distributional Discrimination' may be more fitting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading of our paper and thoughtful comments!
---
**Q1. Epistemic and aleatoric discrimination and their link to uncertainty literature.**
A1. We thank the reviewer for highlighting this crucial point. Below, we discuss their link to uncertainty literature.
In terms of their definitions, epistemic uncertainty arises from a lack of knowledge about the best model, such as the Bayes predictor, while epistemic discrimination results from a lack of knowledge about the optimal fair predictive model. On the other hand, aleatoric uncertainty is the irreducible part of uncertainty caused by the random relationship between input features and label, while aleatoric discrimination is also the irreducible part due to inherent biases in the data-generating distribution.
In terms of their characterization, epistemic uncertainty can in principle be reduced by incorporating additional information; epistemic discrimination can be reduced in a similar way, e.g., by adding more data, since with access to more information a data scientist can choose a more effective fairness-intervention algorithm.
In the infinite sample regime, a consistent learner will be able to remove all epistemic uncertainty, assuming the model class is large enough and there are no computational constraints. Analogously, we demonstrate in Fig. 5 that when the underlying distribution is known, SOTA fairness interventions are able to eliminate epistemic discrimination as their fairness-accuracy curves are close to the fair front.
You are right: the lack of data falls in the domain of epistemic uncertainty. However, the missing data we are discussing is not about missing rows, but rather missing feature values. Indeed, this aligns with the uncertainty literature; see Fig. 5 on page 466 of [Hüllermeier and Waegeman, 2021] for a discussion of how missing feature values can amplify aleatoric uncertainty.
References:
--Hüllermeier, E. and Waegeman, W., 2021. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110(3), pp. 457-506.
---
**Q2. Aleatoric discrimination and distributional differences.**
A2. We’d like to clarify that distributional shift is a form of epistemic discrimination. To understand this, recall that aleatoric discrimination is only based on the properties of the deployed data distribution. It is quantified under the assumption of having complete knowledge of this data distribution (that is, as if an infinite amount of deployed data is available). In contrast, distributional shift arises due to the imperfect knowledge of the deployed data distribution as we can only observe $P_{train}$ which differs from $P_{deploy}$. Hence, by observing more samples from $P_{deploy}$, $P_{train}$ would be “closer” to $P_{deploy}$, and as a result, fairness interventions trained on $P_{train}$ and tested on $P_{deploy}$ could demonstrate an improved fairness-accuracy curve. This would lead to a reduction in epistemic discrimination.
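To illustrate this point numerically, here is a toy sketch (the three-outcome distribution and sample sizes below are hypothetical, not from our experiments): the total-variation distance between the empirical training distribution and $P_{deploy}$ shrinks as more samples are observed, which is why the resulting discrimination is reducible, i.e., epistemic.

```python
import numpy as np

rng = np.random.default_rng(1)
p_deploy = np.array([0.5, 0.3, 0.2])  # hypothetical deployed distribution

def tv_distance(n):
    # Total-variation distance between P_deploy and the empirical
    # distribution P_train estimated from n i.i.d. samples.
    counts = rng.multinomial(n, p_deploy)
    return 0.5 * np.abs(counts / n - p_deploy).sum()

# Averaged over repetitions, more samples bring P_train closer to P_deploy.
tv_small = np.mean([tv_distance(100) for _ in range(50)])
tv_large = np.mean([tv_distance(100_000) for _ in range(50)])
```

As the gap vanishes with sample size, any extra discrimination incurred by training on $P_{train}$ instead of $P_{deploy}$ is reducible, hence epistemic rather than aleatoric.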
---
**Q3. Distributional differences are largely a grey area in prior work. However, works such as FairBatch and FairMixup, tackle such cases and are ideal for inclusion in the experiments.**
A3. We thank the reviewer for pointing out the missing references, which will be cited in the revised paper. As mentioned in A2, distributional difference is a form of epistemic discrimination. Hence, these two works align with the line of fairness interventions aimed at reducing epistemic discrimination, and we will discuss them in the updated paper.
---
**Q4. Little intuitive background. Not obvious that $(S,Y) – X – \hat{Y}$ refers to the Markov chain in Def. 2.**
A4. Thank you for highlighting this issue; we will provide more background information in the revised paper. Regarding the Markov chain, it is a rather mild condition and holds whenever $\hat{Y}$ is generated from $X$ (that is, the classifier only uses $X$ as input). This is always the case in practice; otherwise the classifier would be rather trivial (i.e., it would use $Y$ to predict itself). Also, observe that $S$ can be incorporated as a feature of $X$, so there is no loss of generality in this assumption.
---
**Q5. Hard to see intuitively why FairFront works e.g., why is the markov chain $(S,Y) – X – \hat{Y}$? How does it filter out valid classifiers?**
A5. Here is a short summary of how $FairFront$ works:
First, computing $FairFront$ directly is intractable, as it requires optimizing over a large, or even infinite-dimensional, function space. To circumvent this issue, we write $FairFront$ as a function of a transition matrix $P_{\hat{Y}|S,Y}$ and use $\mathcal{C}$ to denote the set of all $P_{\hat{Y}|S,Y}$ that correspond to a feasible classifier. This is a much lower-dimensional problem since it does not directly depend on the cardinality of the input features $X$. The Markov chain $(S,Y) - X - \hat{Y}$ eliminates the transition matrices that are not associated with a feasible classifier (please refer to Remark 2 in Appendix B.4 on page 17 for an illustrative example). By leveraging Blackwell's results, we approximate the convex set $\mathcal{C}$ via piecewise linear functions with coefficients $a_i$. Algorithm 1 outlines a method that alternately tightens the approximation of $\mathcal{C}$ and computes $FairFront$ under this approximation.
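To make this concrete, the following toy sketch (binary $S$, $Y$, $\hat{Y}$ with a statistical-parity constraint only; the joint distribution is hypothetical) casts the optimization over transition matrices $P_{\hat{Y}|S,Y}$ as a small linear program. It deliberately omits the Blackwell feasibility cuts that Algorithm 1 adds, so it enforces only $0 \leq p \leq 1$ and therefore yields an outer (upper) bound on the frontier:

```python
import numpy as np
from scipy.optimize import linprog

def fairfront_outer_bound(joint, alpha):
    """Best accuracy at statistical-parity level alpha, optimizing over
    transition matrices p[s, y] = P(Yhat=1 | S=s, Y=y) for binary S, Y.

    joint[s, y] = P(S=s, Y=y).  Without the Blackwell feasibility cuts,
    the box constraint 0 <= p <= 1 is the only requirement on p, so the
    result over-estimates the true frontier.
    """
    pi = np.asarray(joint, dtype=float)
    # accuracy = sum_{s,y} pi[s,y] * (y*p[s,y] + (1-y)*(1-p[s,y]))
    #          = const - c @ p,  with const = pi[0,0] + pi[1,0]
    c = np.array([pi[0, 0], -pi[0, 1], pi[1, 0], -pi[1, 1]])
    const = pi[0, 0] + pi[1, 0]
    # statistical parity: |P(Yhat=1|S=0) - P(Yhat=1|S=1)| <= alpha
    p_s = pi.sum(axis=1)
    a0 = np.array([pi[0, 0], pi[0, 1], 0.0, 0.0]) / p_s[0]
    a1 = np.array([0.0, 0.0, pi[1, 0], pi[1, 1]]) / p_s[1]
    res = linprog(c, A_ub=np.vstack([a0 - a1, a1 - a0]),
                  b_ub=[alpha, alpha], bounds=[(0.0, 1.0)] * 4)
    return const - res.fun

# Hypothetical joint distribution where S and Y are correlated:
joint = [[0.4, 0.1], [0.1, 0.4]]
acc_loose = fairfront_outer_bound(joint, alpha=1.0)  # -> 1.0 (unconstrained)
acc_tight = fairfront_outer_bound(joint, alpha=0.0)  # -> 0.7 (strict parity)
```

The feasibility cuts in Algorithm 1 would then progressively remove transition matrices not realizable by any classifier, tightening this bound toward $FairFront$.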
---
**Q6. From Lem. 1, it appears that the FairFront is only applicable to convex classifiers.**
A6. We would like to clarify that FairFront characterizes the fundamental fairness-accuracy trade-offs among ALL types of classifiers, not just the convex ones. The function $\phi$ in Lemma 1 is not a classifier but rather will be approximated by piecewise linear functions in Algorithm 1 for characterizing $\mathcal{C}$ in Definition 2.
---
**Q7. Epistemic and aleatoric discrimination and the corresponding uncertainty counterparts.**
A7. Please refer to our response to your Q1 and Q2.
---
Rebuttal Comment 1.1:
Comment: A1:
Thanks for the clarification. The missing-features case makes sense as a source of aleatoric uncertainty. Can a link be defined between this uncertainty and FairFront mathematically? Or is the distribution used as a surrogate to link them?
A2:
Since you agree that distribution shift is a form of epistemic discrimination, how would aleatoric discrimination be linked to Fairfront in light of this? As per the rebuttal of A1, I am having trouble drawing a formal link between Fairfront and aleatoric uncertainty.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your response!
A1. Below, we revisit the mathematical definition of aleatoric uncertainty according to Section 2.2 on page 463 of [Hüllermeier and Waegeman, 2020]. We also recall the definition of FairFront and contrast the two based on their definitions.
---Given the probability distribution of the deployed data $P_{X,Y}$ and a loss function $\ell$, the Bayes optimal classifier is defined as
$$
f^* := \arg\min_f E_{P_{X,Y}}[\ell(f(X), Y)]
$$
where the minimization is over all measurable functions $f:\mathcal{X} \to \mathcal{Y}$. Then aleatoric uncertainty is defined as the uncertainty of applying $f^*$ to predict the outcome of a new test point $x_{test}$.
---The Bayes optimal *fair* classifier is a natural extension of $f^*$ with discrimination control taken into account:
$$
f_{fair}^* := \arg\min_{f} E_{P_{X,Y}}[\ell(f(X), Y)] \quad \text{s.t.} \quad \text{DiscVio}(f) \leq \alpha
$$
where $\text{DiscVio}(f)$ measures the discrimination violation of the classifier $f$ (see Table 1 for some examples). The value of $f_{fair}^*$ will inherently depend on the fairness level $\alpha$, leading to a Pareto Frontier of fairness and accuracy, and this frontier is exactly FairFront.
Both aleatoric uncertainty and FairFront, by definition, assume full knowledge of the data distribution $P_{X,Y}$. Hence, they are considered irreducible and cannot be diminished by collecting more data. However, incorporating more features into the model can enhance the performance of the Bayes optimal classifier, since changing $X$ inherently changes the data distribution. Similarly, if $X$ is observed through a noisy process (e.g., entries of $X$ are erased), then reducing the noise (e.g., the number of erasures), which would again change $P_{X,Y}$, would affect aleatoric uncertainty; by the same token, it impacts the fairness-accuracy curve of the Bayes optimal *fair* classifier. Hence, including more features in the model can help reduce both aleatoric discrimination (delineated by FairFront) and aleatoric uncertainty.
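As a small numerical illustration of these two definitions under the 0-1 loss (the joint distribution below is a made-up toy, not one of our datasets), both $f^*$ and the aleatoric error it incurs can be read off directly from $P_{X,Y}$:

```python
import numpy as np

# Toy joint distribution P(X, Y): rows are x in {0, 1, 2}, columns y in {0, 1}.
joint = np.array([[0.30, 0.10],
                  [0.05, 0.25],
                  [0.15, 0.15]])

p_x = joint.sum(axis=1)
p_y_given_x = joint / p_x[:, None]

# Bayes optimal classifier under 0-1 loss: predict the most likely label.
f_star = p_y_given_x.argmax(axis=1)

# Aleatoric (irreducible) error: even f* errs whenever P(Y|X=x) is not
# degenerate.  Equivalently, sum_x min_y P(X=x, Y=y).
bayes_error = np.sum(p_x * (1.0 - p_y_given_x.max(axis=1)))  # = 0.30
```

No amount of additional data drawn from the same $P_{X,Y}$ can push the error below this value; only changing the distribution itself (e.g., adding features) can.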
Regarding missing values, consider the following example directly inspired by the HSLS dataset used in our paper. Data is collected from anonymous student questionnaire answers ($X$) in order to predict student performance and school dropout risk ($Y$). Each questionnaire is assumed to be independent and drawn from the same distribution $P_{X,Y}$. If there are a limited number of students to query, the limited sample size translates to imperfect knowledge of $P_{X,Y}$, and hence epistemic uncertainty/discrimination. This uncertainty could be reduced by querying more students, thus increasing sample size and rendering a more precise estimate of $P_{X,Y}$ and of the best prediction accuracy for a given fairness level.
Now, assume that certain students may be reluctant to answer some questions in the questionnaire. For example, students whose parents did not go to college may leave the section on parents' education completely blank. In other words, for each questionnaire, some questions may be left blank, i.e., $X$ may include erased features. This cannot be overcome by querying more students, since missing features are built into the data-generating distribution $P_{X,Y}$; in this case, missing values constitute *aleatoric* uncertainty. Of course, we could probe a student and ask them to complete the questionnaire but, since in our example answers are anonymous, we do not have this option. The missing features are therefore part of the aleatoric uncertainty/discrimination.
In summary, it is important to distinguish the sources of missing data in terms of limited samples vs missing features:
--- Lack of data due to *limited sample:* $P_{X,Y}$ is not known exactly, leading to epistemic uncertainty/discrimination. In this case, collecting more samples from $P_{X,Y}$ reduces epistemic uncertainty/discrimination since it leads to a more precise estimate of the distribution.
--- Lack of data due to *missing features:* the features $X$ can be missing. In this case, since the data-generating distribution $P_{X,Y}$ itself may yield missing features, this uncertainty cannot be reduced by drawing more samples from $P_{X,Y}$. Here, missing features are part of aleatoric uncertainty and discrimination.
**(continued in the next comment ...)** | Summary: This paper splits discrimination in machine learning into aleatoric (which is that inherent to the data distribution), and epistemic (which is that due to choices in the model). They use Blackwell’s results to characterize the fairness Pareto frontier curve. Then, on 4 datasets with 5 fairness interventions, they characterize how close to their upper bound the algorithms are able to achieve.
Strengths: - Comprehensive set of experiments on a number of datasets and fairness interventions
- Clear writing and presentation of work
- Claims are backed up by theories and proofs
Weaknesses: - Given that the output from the algorithm (L236) is always the upper bound for FairFront, rather than exactly FairFront, I wish the paper would have been more upfront with this throughout, especially in the introduction and abstract, because I found this a bit misleading
- Even though the names of aleatoric and epistemic discrimination are taken from prior work, I think they should be potentially rethought because for example, “epistemic discrimination” feels like a term that should be restricted to the philosophical foundations of the word “epistemology,” e.g., as it is used in epistemic injustice (https://academic.oup.com/book/32817), rather than a not-so related mathematical constraint
- Given that a primary differentiation from prior work (L76) is that it works on multiclass classification as well as multiple protected groups, I would have liked to see empirical results on this rather than just claiming this is true
- Given how similar the motivation of this work is to Chen, I. et al. (NeurIPS 2018)’s, I would have liked to see a deeper comparison of the findings of both works
- “Discrimination” as used in this paper is only restricted to different forms of measurement disparities, e.g., statistical parity, equalized odds, and excludes many other forms of algorithmic discrimination that are more structural. So, sentences like L35 “We divide algorithmic discrimination into two categories” should be caveated that this is only a very specific form of algorithmic discrimination that is being divided into two categories (https://aclanthology.org/2020.acl-main.485/)
- As further explanation, taxonomies of algorithmic discrimination are its own genre of normative research, for example the distinction between representational vs allocational harms (https://www.youtube.com/watch?v=fMym_BKWQzk), or a taxonomy of harms within representational harms (https://arxiv.org/abs/2305.01776)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Listed above in weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: I think that the authors could be more upfront about the limitation that their work maps out the upper bound of the FairFront rather than the FairFront exactly, and also that they have taken a rather narrow lens of fairness, neglecting broader societal concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the kind comments and the encouragement!
---
**Q1. Algorithm (L236) is always the upper bound for FairFront. More upfront with this throughout.**
A1. Thank you for raising this concern. We will clarify that we provide an upper bound estimate of $FairFront$ in the introduction and abstract.
---
**Q2. The names of aleatoric and epistemic discrimination.**
A2. We appreciate your insightful comment regarding the potential ambiguity associated with our terminology. We will address this confusion in the revised paper. For clarity, we borrow these terms from the uncertainty literature (see Appendix D.2). Therein, epistemic uncertainty refers to the reducible uncertainty stemming from a lack of knowledge about the best model, while aleatoric uncertainty is the inherent, irreducible uncertainty due to random associations between input features and labels. Analogously, epistemic discrimination arises from a lack of knowledge about the optimal fair predictive model, whereas aleatoric discrimination pertains to the irreducible aspects attributed to inherent biases in the data-generating distribution.
---
**Q3. Empirical results on multiclass classification and multiple protected groups.**
A3. Please refer to Figure 4 in Appendix where we conducted experiments on the HSLS dataset with 4 groups and 5 labels. In short, our observation is consistent: state-of-the-art fairness interventions are effective at reducing epistemic discrimination as their fairness-accuracy curves are close to FairFront.
---
**Q4. Deeper comparison with Chen et al. (NeurIPS 2018).**
A4. Chen et al. (2018) decomposed group fairness measures into bias, variance, and noise in their Theorem 1 and proposed strategies for reducing each term accordingly. There are two notable differences compared with their work.
First, Theorem 1 in Chen et al. (2018) provides a decomposition for only a *single* group fairness metric. In contrast, our FairFront provides a more comprehensive analysis, characterizing the fundamental trade-offs between *multiple* group fairness metrics and *accuracy* among all classifiers. This analysis is more technically complex, as the interactions between different group fairness metrics and accuracy can incur trade-offs or, in some cases, may even be mutually exclusive (as evidenced by the impossibility results [Kleinberg et al., 2016; Chouldechova, 2017]).
Second, the applications of our method are different from Chen et al. (2018). We specifically applied our method to benchmark existing fairness interventions, showcasing their effectiveness in eliminating epistemic discrimination across group fairness metrics. The analysis in Chen et al. (2018) is not suited for this purpose, as it solely considers a singular fairness metric. Additionally, we studied how the presence of missing values can impact aleatoric discrimination, thereby diminishing the effectiveness of fairness interventions. Note that the challenge posed by missing values has been generally neglected in the existing literature, including in the work of Chen et al. (2018). We hope that our efforts can inspire further research into this subject, contributing to the development of algorithms aimed at reducing biases under these conditions.
---
**Q5. “Discrimination” used in this paper is only restricted to different forms of measurement disparities.**
A5. We appreciate the reviewer for highlighting this crucial issue. We will explicitly state that our focus is limited to a specific form of algorithmic discrimination, namely, performance disparity across protected groups; other types of algorithmic discrimination or situations where group attributes are not clearly defined would require a non-trivial extension of our results.
---
**Q6. Taxonomies of algorithmic discrimination are its own genre of normative research, for example the distinction between representational vs allocational harms, or a taxonomy of harms within representational harms.**
A6. We thank the reviewer for pointing out these important references. As mentioned above, we will highlight that our focus is on statistical aspects of a specific form of algorithmic discrimination (performance disparities). We will also add the references you mentioned, specifically on the distinction between representation and allocational harms, as well as on active research on disentangling the range of normative concerns often bundled as "unfairness" or "discrimination." We will also highlight that, as posed by Katzman et al. (2023), no single measurement approach for "fairness" is definitive, particularly across different contexts and use cases.
---
**Q7. More upfront about the limitation that their work maps out the upper bound of the FairFront. A rather narrow lens of fairness, neglecting broader societal concerns.**
A7. Thank you for raising these important points. We hope our response in A1, A5, and A6 has addressed all of your comments. Please feel free to let us know if you have any additional comments that can help us further improve the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for responding to my comments. Upon reading the other reviews, I hope that the authors will be very explicit about the limitations of what kind of “fairness” they cover in their work, as you have stated in your rebuttal. I appreciate your detailed response for the comparison to Chen et al. (2018), and hope that some of it can be in the main paper as well. I will keep my rating as it originally is.
---
Reply to Comment 1.1.1:
Title: Thank you for your prompt response!
Comment: Thank you for your prompt response! Yes, we will make sure to explicitly state what kind of "fairness" measures covered in this paper and include a detailed discussion with Chen et al., 2018 in the revised paper. Finally, we would like to express our appreciation once more for the insightful and constructive comments you provided. | Summary: This paper proposes a decomposition of discrimination (in ML classifiers) into aleatoric (irreducible) and epistemic (reducible) components. The paper surveys related work in fairness. It then introduces and discusses the Fairness Pareto Frontier, which is essentially an upper bound on the accuracy of the best possible fairness constrained classifier. The paper proposes an algorithm for estimating aleatoric discrimination (this upper bound) which utilizes Blackwell’s results on comparing statistical experiments. It then demonstrates the applicability of the algorithm through experiments on 5 datasets. Finally, it identifies missing values in the dataset as a source of aleatoric discrimination, providing experimental evidence to support the discussion.
Strengths: Though I was admittedly not familiar with Blackwell’s results, the paper’s technical contributions are insightful and convincing.
The paper makes a novel connection to statistical results that (as far as I am aware) had not been considered previously in fairness research.
The theoretical framework and results, as well as the estimation method are applicable to a wide range of fairness-constrained predictive modelling problems (multi-class, multi-attribute), and thus are a substantial contribution to the field.
The paper does a good job of surveying related work, and distinguishing its contributions.
The writing is concise and of high quality.
Weaknesses: The paper is dense and would benefit considerably from an illustrative example, helping to build intuition about the terms and concepts introduced. Specifically to help digest the main theoretical result, Theorem 1, and the piece-wise linear approximation.
The paper presents FairFront as an upper bound whose estimate depends on k, and T. However, for a given data distribution, it is unclear how the k-dependence is affected by the quantity of available data. As a practitioner, I would want to understand the robustness of this upper bound to data availability. Some experiments that subsample the dataset would be a helpful starting point. Perhaps a version of the experiment discussed in Appendix C.2 with varying levels of data availability (up to the infinite data regime).
- The region described as epistemic discrimination (in Figure 1) is effectively a gap between an upper bound (FairFront) and a lower bound (some fairness-constrained modelling approaches). As such, the true fairness Pareto frontier is somewhere in between. It is unclear why the reader should interpret FairFront as the more accurate bound, and thus interpret the gap as epistemic discrimination (rather than FairFront estimation error). That is to say, the paper could elaborate on the tightness of this approximate upper bound based on choices of k and T.
The use of transparency in Figure 2 makes it difficult to compare scenarios (missing probabilities), especially when printed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: When the fairness constraints are trivially satisfied, for example $\alpha_{\text{SP}}, \alpha_{\text{EO}}, \alpha_{\text{OAE}} = \infty$, how should one interpret the FairFront?
Line 231: what is meant by “mostly violated”?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The paper refers to fairness and discrimination, but makes no effort to connect these concepts to the harms or benefits experienced by real people. This is fairly common practice in the field and not unique to this paper. However, it is nonetheless a noteworthy limitation. That is to say, one cannot truly comment on the fairness (or discrimination) of a machine learning system without a broader understanding of the social impacts–beyond just its predictions.
It is unclear whether the method can be applied in the numerous important situations where the prediction outcome is unobserved (or partially observed), and what effects estimating these counterfactuals would have on the method, e.g., in lending, where the outcomes of the rejected cohort are counterfactuals.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and for appreciating the merits of the work!
---
**Q1. An illustrative example, helping to build intuition about the terms and concepts introduced.**
A1. We thank the reviewer for this valuable suggestion. One concrete example is the COMPAS dataset. It has been overused in the field, leading to recent calls to move on from it as a benchmark. Nevertheless, since its public release, hundreds of group fairness interventions have been proposed and benchmarked on this dataset, amounting to hundreds of achievable fairness-accuracy pairs. A natural question to ask is: are we close to the optimal Pareto Frontier on this dataset? FairFront – a theoretical upper bound for the *best* achievable fairness-accuracy frontier – shows that we are indeed close to the optimal frontier when these datasets reflect the true underlying distribution. This provides theoretical backing to the sentiment of moving on from such datasets and from optimizing solely for group fairness and accuracy.
Please also refer to Appendix B.4 for two technical examples that illustrate our results in special cases (Remark 2: $X$ and $(S,Y)$ are independent; Remark 3: $X$ is discrete).
---
**Q2. The estimate of FairFront depends on $k$ and $T$. It is unclear how the k-dependence is affected by the quantity of available data.**
A2. This is a great suggestion! Regarding the k-dependence, we conjecture that $k = A*C$ should suffice, where $A$ is the number of protected groups and $C$ is the number of labels. While Blackwell proved this result for $k=2$ in Theorem 10 of his 1953 paper, he did not extend the proof to general $k$. In the infinite-data regime, our experiments have consistently verified this conjecture numerically. As for the T-dependence, we observed in our experiments that setting $T = 20$ always ensured algorithm convergence.
---
**Q3. The gap between an upper bound (FairFront) and a lower bound (fairness-constrained approaches). The tightness of this approximate upper bound based on choices of $k$ and $T$.**
A3. You are right: the gap can be the result of the estimation error or epistemic discrimination. Nonetheless, we observed in our experiments that the estimation error tends to be quite small when $k\geq A*C$ and $T=20$. Broadly, the theoretical characterization of the estimation error of FairFront, as well as its dependence on $k$ and $T$ remains an open question. We will address this issue in the limitations section of our revised paper.
---
**Q4. The use of transparency in Figure 2.**
A4. We apologize for this inconvenience and will use distinct colors to represent different missing probabilities.
---
**Q5. When $\alpha_{SP}, … = \infty$, how to interpret FairFront?**
A5. When the fairness constraints are trivially satisfied, FairFront evaluates the accuracy of the Bayes optimal classifier.
---
**Q6. What is meant by “mostly violated”?**
A6. Suppose $P$ does not belong to $\mathcal{C}_k$. We will find a piecewise linear function that partitions the space into two regions. One region contains $P$ and the other one contains $\mathcal{C}_k$. “Mostly violated” means we construct this function by *maximizing* the distance between $P$ and the boundary defined by the function.
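As a toy sketch of this step (2-D points stand in for transition matrices, and a simple grid search over unit directions replaces our actual optimization; all quantities below are hypothetical), the mostly violated cut maximizes the gap between $P$ and the boundary of the current approximation of $\mathcal{C}_k$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for C_k: the convex hull of 200 sampled feasible points in 2-D.
feasible = rng.uniform(0.0, 1.0, size=(200, 2))
p_out = np.array([1.5, 1.5])  # a point P that lies outside C_k

# A cut in direction a reads  a @ x <= max_{c in C_k} a @ c.  Its violation
# at P is  a @ P - max_c a @ c; the "mostly violated" cut maximizes this
# gap over unit directions (here via grid search over angles).
best_gap, best_a = -np.inf, None
for theta in np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False):
    a = np.array([np.cos(theta), np.sin(theta)])
    gap = a @ p_out - (feasible @ a).max()
    if gap > best_gap:
        best_gap, best_a = gap, a
```

Adding the resulting inequality then separates $P$ from the current approximation of $\mathcal{C}_k$ in the next iteration of the alternation.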
---
**Q7. The paper refers to fairness and discrimination but makes no effort to connect these concepts to the harms or benefits experienced by real people.**
A7. Thank you for highlighting this critical aspect. We fully agree: understanding the real-world impacts and harms on real people goes well beyond the limited technical and mathematical metrics often emphasized in ML research.
Our paper's main message is that existing fairness interventions optimized solely for group fairness and validated on overused datasets like Adult and COMPAS are provably nearing their theoretical best in terms of specific group fairness metrics and accuracy. Rather than just adding to such interventions, our results provide theoretical backing to your sentiment that it's time for the community to move on from incremental improvements on these benchmarks. For example, as we argue in Section 4.2, addressing real-world challenges, like data with missing values where these patterns correlate with group attributes, is crucial. Our findings partially close the chapter on the myopic approach of designing interventions optimized entirely for group fairness metrics and benchmarking on overused datasets: in this limited setting – for which hundreds of interventions have been produced – we are already close to the information-theoretic best!
You're right; we could have delved deeper into the broader social implications of our findings. In our final remarks, we'll stress the importance of moving beyond just numbers to consider the tangible effects on real individuals and the practical factors impacting data quality; this was indeed the core motivation for our work, and we hope to send a signal that we should focus on other problems beyond simply group fairness and accuracy on datasets such as COMPAS. We value your feedback and will emphasize this perspective in the revised manuscript.
---
**Q8. Whether the method can be applied when the prediction outcome is unobserved (or partially observed).**
A8. Thank you for pointing out this scenario. Indeed, our technique can be applied to estimate FairFront where prediction outcomes are unobserved or partially observed. This is because Algorithm 1 only requires a classifier $g$ that predicts $(S,Y)$ from $X$, along with the probability distribution of unlabeled data for computing the expectation. If the prediction outcome is unobserved, one could learn the function $g$ using a one-class classification method. However, we caveat that the approximation error of $g$ may increase, potentially affecting the accuracy of estimating FairFront.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response. I have read the other reviews and associated rebuttals. I stand by my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your response. We will make sure to include the promised changes in the revision (both in the main text and appendix). | Summary: The paper casts the issue of fairness in machine learning from the perspective of two classes of discrimination: those due to aleatoric uncertainty, or where the inherent limitations of the data distribution, and epistemic discrimination, which is due to modeling choices. It then uses this framework to analyze a common set of fairness metrics, showing that epistemic discrimination can be reduced by optimizing for these fairness measures, but not necessarily aleatoric discrimination.
Strengths: - The choice to disentangle sources of discrimination due to modeling choices vs. data properties is sound and reasonable.
- To the best of my knowledge, studying aleatoric discrimination via Blackwell's equations is novel (the reviewer is not familiar with this set of techniques, so can offer little here)
- The choice to study missing data values is natural.
Weaknesses: My main concern with this paper was that its contribution feels quite limited, both in terms of thoroughness of the experiments (it would have been interesting to study sources of data properties beyond missing values, such as what Khani & Liang (2020) do with spurious features) and in terms of its relevance in modern machine learning (the only data studied is the well-known COMPAS dataset, and it could have been interesting to consider situations where more advanced models -- such as language or vision models -- are applied, as well as settings where the motivation to apply ML is well justified, unlike risk assessment).
I also think the authors could clarify quite early on that they consider a specific subset of fairness interventions - group fairness with defined subgroup labels. This is not always possible when subgroups aren't clearly defined, are challenging to measure, or are intractable.
In a similar vein, the writing could be more precise to properly convey the scope of FairFront. For example, sentences like "For instance, if the training set contains few samples from a given population group, then increasing sample diversity is a more effective strategy than selecting a more complex model class or training strategy." --> there are training methods that try to account for unequal group size (e.g., Hashimoto, 2018) when group labels are inaccessible and increasing sample diversity is hard. These sentences could be qualified to note that we are in a particular setting where we know the groups we want to be fair over and can measure them.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Are you able to run experiments that involve a bit more complex sources of data variability (e.g. a natural language task with different dialects, so more than 2 group labels, and using more state-of-the-art models like GPT series)?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: I think the paper could also be more clear on the importance of the overall result. Lacking thorough experimentation, the paper largely presents a framework but does not argue for its utility. Do the authors intend their framework to be used as a rigorous benchmark? If so, they should focus on a more comprehensive set of experiments. Otherwise, what is the core message of knowing that methods that are not data-specific do not address dataset-specific sources of discrimination?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind comments and the encouragement!
---
**Q1. Contribution feels limited both in terms of experiments and its relevance in modern machine learning.**
A1. Thank you for raising this important point. First, please note that we provide numerical results for **four** benchmark datasets: Adult, COMPAS, German Credit, and HSLS. Both German Credit and HSLS results are presented in Appendix C. We highlight that the HSLS dataset is a multi-class classification task with multiple protected groups that first appeared in the ML literature last year, and it captures a common use-case of ML in education (student performance prediction, see Jeong et al., 2022). We also compared FairFront against **five** group fairness interventions (see lines 265-278). These include Reduction and FairProjection, which are, to our knowledge, the state of the art in terms of fairness-accuracy frontiers. The scale of our numerical results is comparable to recent comprehensive benchmarks of fairness interventions with publicly available reproducible code (cf. Alghamdi et al. in NeurIPS 2022).
We highlight that our choice of presenting Adult and COMPAS in the main text was strategic. These datasets, and especially COMPAS, have been extensively used as benchmarks in fairness research. Our findings on them show that there are *diminishing returns* in benchmarking new fairness interventions on these datasets and that existing methods approach the information-theoretically optimal Pareto frontier given by FairFront (see line 323). We believe this sends a pivotal, theoretically-grounded message to the community to innovate beyond these overused datasets. The choice of these two datasets for the main text was also practical: most interventions can be readily applied to them without change, allowing for more comprehensive benchmarks of existing methods against FairFront (see the five methods in Figure 1).
We certainly appreciate the suggestion on exploring more advanced models and other data properties. It aligns well with our vision, as we touch upon in lines 26-27 and in our final remarks. Our intent is to drive the field towards addressing fresh challenges in responsible ML rather than refining on established datasets. Again, a main contribution of our paper is that there is no room for improvement in terms of vanilla group fairness/accuracy values of these overused datasets. As a field, we should perhaps close this research chapter and move to new challenges in responsible ML such as the ones you mentioned.
---
**Q2. Group fairness with defined subgroup labels. This is not always possible when subgroups aren't clearly defined, challenging to measure, or are intractable.**
A2. Thank you for pointing this out – we will clarify this in the introduction. As we mentioned above, one of our main goals is to demonstrate that the subset of fairness interventions optimized solely for group fairness metrics where groups are well-defined appear to achieve the information-theoretic best on standard benchmarks. There are hundreds of such interventions proposed in the past decade, and we hope our results provide theoretical backing for the field to move on to more realistic and pressing issues such as the ones you suggest. The case where subgroups aren't clearly defined (or represented by a set of functions, such as in multiaccuracy and multicalibration) is not covered by our results, but is indeed an important research direction. We will stress this limitation in the final section.
---
**Q3. There are training methods that try to account for unequal group size (e.g., Hashimoto, 2018) when group labels are inaccessible, and increasing sample diversity is hard.**
A3. You are correct, and we will qualify the scope of FairFront in a revised manuscript. Specifically, we will add that:
"When group labels are inaccessible or only partially accessible, increasing sample diversity can be challenging. In such cases, fairness interventions that account for only partially observed group attributes (e.g., Hashimoto, 2018) are a compelling alternative."
---
**Q4. Run experiments that involve more complex sources of data variability (e.g. a natural language task with different dialects, so more than 2 group labels, and using more state-of-the-art models like GPT series)?**
A4. Thank you for your suggestion about exploring tasks like natural language with varied dialects and using advanced models such as the GPT series. We note that our scope is tabular datasets, since this allows us to benchmark state-of-the-art fairness interventions against the theoretical optimum given by FairFront.
While gathering dialect-based datasets and training and/or probing LLMs is challenging given our limited resources, we reiterate the additional results in Appendix C that include other datasets (e.g., HSLS, where there are multiple groups). As we mentioned in the response to your Q1, a central theme in our paper is to demonstrate that existing fairness interventions are approaching the information-theoretic optimal fairness-accuracy Pareto frontier on widely-used datasets. We are excited to include your suggestion in our concluding section as a compelling direction for future work, particularly in light of our findings on the diminishing returns of focusing on overused tabular datasets.
---
**Q5. Importance of the overall result. Lacking in thorough experimentation. What is the core message?**
A5. Please refer to our responses to your Q1, Q2, and Q4. In short, the core message in this study is that there are *diminishing returns* in benchmarking new fairness interventions on standard (overused) datasets as existing methods are approaching the information-theoretically optimal Pareto frontier delineated by FairFront. We back this claim with numerical results from 4 benchmark datasets, tested against 5 state-of-the-art fairness interventions.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response.
Q1. I appreciate the clarification regarding the strategic choice of commonly used datasets (e.g., Adult) to highlight the limitation of studying these datasets given the FairFront bound. I don't think this messaging comes across in the current paper draft, however -- the focus of the paper seems more of a critique of broader approaches in the community to fairness interventions, rather than a specific critique of these simple datasets. If anything, I would think experiments with a more complex dataset and highlighting differences in results would drive home this point more. So I would encourage the authors to think more about the framing.
Q3. The purpose of this comment was more to highlight that there exist proposed methods in the literature that are *training*-time interventions, not just dataset-level operations -- the current text seems to imply that addressing unequal dataset issues is impossible via the training method, but modified loss functions can capture this.
I have updated my score, but would strongly encourage the authors to explicitly state their choice to use their datasets in a revised version.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your response and follow-up comments!
Q1. Absolutely. In the revised paper, we will clearly state our main message: there are diminishing returns in benchmarking new fairness interventions on standard (overused) datasets as existing methods are approaching the information-theoretically optimal Pareto frontier delineated by FairFront. Additionally, we will articulate the rationale behind selecting commonly used datasets for our experiments.
Q3. Thank you again for highlighting this line of work. We will clarify our scope more clearly in the revised paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this manuscript, the authors make two main contributions to the technical study of algorithmic fairness.
Firstly, on the conceptual level, they propose to distinguish between aleatoric and epistemic discrimination. By the former term, they refer to the notion that the optimal achievable performance level may differ between different (e.g., racial or gender-based) groups, corresponding to notions of differing task difficulty. No algorithmic bias mitigation approach can resolve aleatoric discrimination, which is a property of a dataset, not a model. Epistemic discrimination, on the other hand, refers to a model that performs sub-optimally on a given group, compared to the performance level it could achieve on this dataset.
Secondly, the authors develop a new way to characterize an upper bound on the Fairness Pareto frontier, i.e., the frontier characterizing optimal trade-offs between overall model accuracy and various fairness constraints. The characterization is based on an old theoretical result by Blackwell and facilitates an efficient implementation by means of a greedy algorithm developed by the authors. This second contribution relates to the first in that it provides a lower bound on the level of aleatoric discrimination in a given dataset.
Several numerical experiments in standard (low-dimensional, tabular) algorithmic fairness datasets confirm the validity of the bound and the relative tightness of some previously proposed bias mitigation techniques, emphasizing that significantly better trade-offs are unlikely to be achievable in these datasets. An experiment with artificially induced missing data shows how this increases aleatoric discrimination, as would be expected.
Strengths: The paper makes several highly original and important contributions to the algorithmic fairness field.
The framing of different sources of algorithmic discrimination as separated into aleatoric and epistemic discrimination is novel, and it importantly helps in understanding and characterizing the limitations of bias mitigation techniques, which can only ever aim to reduce epistemic discrimination. While similar distinctions have been made before, I find the framing and formulation presented in the manuscript particularly clear and useful.
To my knowledge, the manuscript provides the first characterization of an *upper* bound on the Pareto frontier, the latter being an essential object of study in algorithmic fairness. It also treats the general case of both multiple sensitive groups and multiple output labels, which is relatively rare in the field (which mostly focuses on the binary setting for both.)
The manuscript introduces a novel theoretical tool, based on the old results of Blackwell, into the field of algorithmic fairness, which may spark further theoretical innovations beyond the present paper. I consider this alone to be an important creative contribution.
The paper is very well written, the results are clearly presented, and prior work is comprehensively discussed and acknowledged.
Weaknesses: I do not see many weaknesses in the manuscript, but there is one issue that I think deserves a more comprehensive discussion. This is related to the approximation of g, which maps inputs X to P(S,Y | X), where Y is the output label and S the sensitive group. As I understand it, the correctness of the approximated upper bound depends crucially on the correctness of this approximation. In the simple low-dimensional tabular data cases considered in the manuscript, g can be computed efficiently and precisely, but in the case of more complex (continuous, image?) domains, I would imagine this to be a serious limitation. Do I understand correctly that if g is approximated poorly, the computed bound may in fact not be an upper bound? I would appreciate a more comprehensive discussion of the impact of this approximation on the correctness of the estimated upper bound.
Also, can the authors provide some hints at important future extensions of the presented work, or important limitations of the approach? Section 5 is called "Final Remarks and Limitations", but it actually only summarizes the contributions of the present manuscript and lists no limitations.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. As outlined in detail above: what are the implications of the approximation on g? How does this impact potential other application domains, such as image analysis? Do I understand correctly that in the tabular cases considered in the paper, g is not approximated but in fact computed exactly?
2. I was a little confused by the notation in some places. For instance, the authors write that they "define FairFront(αSP, αEO, αOAE) as the solution of the following optimization problem", where the following optimization problem optimizes over a classifier h. However, later on, the authors use notation such as "FairFront1(αSP, αEO, αOAE) ≥ FairFront2(αSP, αEO, αOAE)", prompting me to ask: what exactly *is* "FairFront"? Is it the model? Its achieved accuracy? Something else? This could be made clearer by a notation such as "FairFront = arg max ...", for instance.
3. How does the complexity of the algorithm scale with, e.g., the number of sensitive groups to be considered? The authors emphasize that their approach "can be easily extended to the setting where multiple subgroups overlap"; how exactly would this work? In the case of multiple non-binary sensitive attributes, the number of subgroups to consider grows combinatorially; will that present practical problems?
4. To give practitioners some sense of the complexity of Algorithm 1, could the authors provide run time measurements for the case studies?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: 1. Like outlined above, the limitations induced by approximating g are presently not obvious to me; I could imagine these to be quite consequential if they effectively limit the applicability of the presented work to low-dimensional tabular data.
2. More generally, the authors do not discuss any limitations of their approach. One limitation that comes to my mind is that it has been shown repeatedly that fairness-accuracy trade-offs may be illusory in the case of group-dependent label noise or selection biases, see, e.g., Blum and Stangl (2020), Wick et al. (2019), Dutta et al. (2020). Another limitation seems to be that the presented approach relies on the availability, correctness, and validity of sensitive attributes, which is often not given; see, e.g., Jacobs and Wallach (2021) and Tomasev et al. (2021).
Blum and Stangl (FORC 2020): Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? https://drops.dagstuhl.de/opus/volltexte/2020/12019/pdf/LIPIcs-FORC-2020-3.pdf
Dutta et al. (ICML 2020): Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing. https://proceedings.mlr.press/v119/dutta20a.html
Jacobs and Wallach (FAccT 2021): Measurement and Fairness. https://dl.acm.org/doi/10.1145/3442188.3445901
Tomasev et al. (FAccT 2021): Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. https://dl.acm.org/doi/10.1145/3461702.3462540
Wick et al. (NeurIPS 2019): Unlocking Fairness: a Trade-off Revisited. https://papers.nips.cc/paper/2019/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-Paper.pdf
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and for appreciating the novelty of the work!
---
**Q1. Approximation of $g$ and the impact on the estimated upper bound.**
A1. Yes, your understanding is correct! The approximation error of $g$ can influence the estimation of $FairFront$. If $g$ is not approximated accurately, the estimation might not serve as an upper bound any longer.
To circumvent this issue, we applied fairness interventions to the entire dataset in our experiments and subsequently resampled 30% of the data for the test set (see lines 276-278). In this case, $g$ can be precisely computed from the empirical distribution without any error, and FairFront gives an information-theoretic upper bound. Generally, for low-dimensional tabular data, we anticipate that the approximation error of $g$ won't be significant, given the relative ease of training well-calibrated base models like random forests to predict S and Y from X. We acknowledge that in other domains, this approximation error might be significant. A practical solution would be bootstrapping confidence intervals for refitting the curve over data splits. Nonetheless, a theoretical characterization of the extent to which this error impacts the estimation of $FairFront$ remains an open problem (please refer to our A2 for discussions about the limitations which will be included in the last section).
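To make the empirical computation of $g$ concrete, here is a toy sketch (our illustration, not the authors' implementation; the data and names are hypothetical) that estimates $g(x) = P(S, Y \mid X = x)$ by counting over a discrete dataset. When every value of $X$ in the test distribution is observed, this empirical $g$ is exact, which is the regime described above:

```python
from collections import Counter, defaultdict

def fit_g_empirical(rows):
    """Estimate g(x) = P(S, Y | X = x) by counting over a discrete dataset.

    `rows` is a list of (x, s, y) tuples with discrete features. On tabular
    data where all x values are observed, this is the exact empirical g,
    so it introduces no approximation error into the FairFront estimate.
    """
    counts = defaultdict(Counter)
    for x, s, y in rows:
        counts[x][(s, y)] += 1
    g = {}
    for x, c in counts.items():
        total = sum(c.values())
        # Normalize counts into a conditional distribution over (S, Y).
        g[x] = {sy: n / total for sy, n in c.items()}
    return g

# Toy dataset: x is a single discrete feature, s a binary group, y a label.
rows = [(0, 0, 1), (0, 0, 1), (0, 1, 0), (1, 1, 1)]
g = fit_g_empirical(rows)  # e.g., g[0][(0, 1)] == 2/3
```

On continuous or high-dimensional domains, `fit_g_empirical` would have to be replaced by a learned, well-calibrated model, which is exactly where the approximation error discussed above enters.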
---
**Q2. Future extensions and important limitations.**
A2. Thank you for highlighting this matter. In response to your concerns, we provide a discussion on the limitations and potential future directions below. We will ensure this discussion is incorporated into the revised paper.
In this paper, we present an upper bound estimate for $FairFront$. However, it is important to note that this estimate may be subject to errors originating from various sources. These include (i) the approximation error of the function $g$, (ii) estimation errors from computing the expectation in Eq. (6) with a finite dataset, and (iii) the influence of the hyperparameters $T$ (the number of iterations of Algorithm 1) and $k$ (the number of segments in the piecewise linear functions). Regarding the dependence on $T$, our Theorem 2 ensures the algorithm's asymptotic convergence as $T \to \infty$. However, we have not established a proof for its behavior at finite $T$. Regarding the dependence on $k$, we conjecture that $k = A \cdot C$ should suffice, where $A$ is the number of subgroups and $C$ is the number of labels. While Blackwell proved this result for $k=2$ in Theorem 10 of his 1953 paper, an extension of this proof to a general value of $k$ is an open problem.
We define aleatoric and epistemic discrimination with respect to the entire population. Investigating their per-instance counterparts and the relationship to individual fairness would be a compelling area of future inquiry. Additionally, a more nuanced analysis of aleatoric and epistemic discrimination is desirable, further breaking them down into fine-grained components. For instance, epistemic discrimination may be attributed to various factors including limited training data, noisy observations of labels or sensitive attributes, and constraints of learning algorithms. Characterizing each of these components and devising appropriate solutions to mitigate them can lead to a more comprehensive taxonomy of sources of bias and (un)fairness in classification and prediction. Lastly, further exploration of other evaluation criteria, such as scalability, generalization, and robustness against partial knowledge of group attributes in the context of benchmarking existing fairness interventions is a valuable avenue for future research.
---
**Q3. Implications of the approximation on $g$ and how this impacts potential other application domains.**
A3. Please refer to A1.
---
**Q4. What exactly is FairFront?**
A4. Thank you for bringing up this concern. To clarify, $FairFront(\alpha)$ measures the maximal *achieved accuracy* of the model, given that its discrimination violation is upper bounded by $\alpha$.
Mathematically,
$$\mathrm{FairFront}(\alpha_{SP}, \ldots) := \max_{h}\ \mathrm{Accuracy}(h) \quad \text{s.t. } SP(h) \leq \alpha_{SP}, \ldots$$
$FairFront_{k}$ is an upper bound approximation of $FairFront$ using $k$-piecewise linear functions. We will clarify this in the revised paper.
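This definition can be illustrated with a brute-force toy example (our sketch with made-up numbers, not from the paper; it restricts to deterministic classifiers for brevity, whereas FairFront optimizes over randomized classifiers, so this toy value is a lower bound on the toy problem's FairFront). With a single statistical-parity constraint $SP(h) \leq \alpha$:

```python
from itertools import product

# Hypothetical joint distribution P(x, s, y) on small discrete domains
# (illustrative numbers only).
P = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.20,
    (1, 0, 0): 0.05, (1, 1, 1): 0.25,
    (2, 1, 0): 0.30, (2, 1, 1): 0.10,
}
X, Y = [0, 1, 2], [0, 1]

def accuracy(h):
    # P(h(X) = Y) under the joint distribution.
    return sum(p for (x, s, y), p in P.items() if h[x] == y)

def sp_gap(h):
    # Statistical-parity violation |P(h(X)=1 | S=0) - P(h(X)=1 | S=1)|.
    mass = {0: 0.0, 1: 0.0}
    hit = {0: 0.0, 1: 0.0}
    for (x, s, y), p in P.items():
        mass[s] += p
        if h[x] == 1:
            hit[s] += p
    return abs(hit[0] / mass[0] - hit[1] / mass[1])

def fair_front(alpha):
    # Max achieved accuracy over all deterministic classifiers h: X -> Y
    # whose statistical-parity gap is at most alpha.
    best = 0.0
    for ys in product(Y, repeat=len(X)):
        h = dict(zip(X, ys))
        if sp_gap(h) <= alpha:
            best = max(best, accuracy(h))
    return best
```

Relaxing the constraint can only raise the frontier, e.g. `fair_front(1.0) >= fair_front(0.0)` on this toy distribution, mirroring the monotonicity of $FairFront$ in $\alpha$.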
---
**Q5. Scalability with the number of sensitive groups.**
A5. As the number of protected groups, denoted by $A$, increases, the numbers of variables in both the convex program and the DC program in Alg. 1 increase linearly with $A$. Note that the running time of standard convex/DC program solvers is only mildly dependent on the number of variables (see, e.g., the dimension-independent convergence bounds in Cor. 4.1 of Abbaszadehpeivasti et al., 2023 and Thm. 5 of Faust et al., 2023). In our experiments, we also observed that even when doubling the number of groups and labels, 20 iterations consistently suffice for our algorithm to converge.
– Abbaszadehpeivasti et al., 2023. On the rate of convergence of the difference-of-convex algorithm (DCA).
– Faust et al., 2023. A Bregman Divergence View on the Difference-of-Convex Algorithm.
---
**Q6. Provide run time measurements for the case studies**
A6. Below, we present the runtime of Alg. 1 across datasets of different scales. All experiments were run on a personal computer with 10 CPU cores and 16GB of memory. We did not optimize our Python implementation (e.g., by using GPUs), so the run time could be further reduced.
German credit (1000 rows, 21 features): 0.53 mins
COMPAS (5278 rows, 7 features): 1.73 mins
Adult (46447 rows, 8 features): 6.11 mins
HSLS (10937 rows, 9 features): 12.33 mins
---
**Q7. The limitations induced by approximating g**
A7. Please refer to A1.
---
**Q8. Not discuss any limitations of their approach.**
A8. Thank you for sharing your thoughtful insights and providing the associated references. Please refer to A2 for a detailed discussion regarding limitations.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed and informative responses to my questions. In light of these, and also considering the other reviews, rebuttals, and discussions, I am updating my score to Strong Accept.
One final remark concerning additional related work: it could also be interesting to mention the recent branch of the literature discussing omnipredictors. E.g., Globus-Harris et al. (2022) and Hu et al. (2023) discuss how and under which conditions Bayes-optimal fair classifiers can be derived using simple post-processing techniques from multicalibrated regressors.
https://arxiv.org/pdf/2209.07312.pdf
https://proceedings.mlr.press/v202/hu23b.html
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your response and providing further related literature – indeed, this line of research is very relevant! We will expand the first paragraph of our Related Work section to highlight the work on omnipredictors and multicalibration, including the ones you mentioned, and their connection with Bayes-optimal fair classifiers. In our final section, we will also point to the burgeoning literature on omnipredictors and multicalibration as a source of new strategies for producing fair classifiers with theoretically-backed performance guarantees. | null | null | null | null | null | null |
Block-State Transformers | Accept (poster) | Summary: The authors present a new long-range transformer architecture by incorporating SSMs. This novel model outperforms several established baselines, such as Transformer XL, Block Recurrent Transformer, and Sliding Window Transformer, in terms of cost-effectiveness trade-off for tasks involving long-document or code modeling.
Strengths: 1) Well-motivated. Long-range modeling is becoming increasingly important for LLM community.
2) Good results on language modeling (PG19, arXiv, Github)
3) The writing is clear and effectively conveys the ideas and findings.
4) The model design is intuitive and well-reasoned. The inclusion of both local full attention and linear components to handle long sequences is a sensible approach.
Weaknesses: 1) The scale is too small. For language modeling, based on the success of LLMs, we always expect good scalability. This paper only conducts experiments with up to 380M params, which is far from many emergent-ability thresholds. When scaling up, many inductive biases would become useless [1].
2) More insightful experiments beyond language modeling are required. For instance, as the authors mentioned in the limitation section, what about the results on Long Range Arena?
3) Any case study about how this model captures long-range dependency? Why this model is indeed better? It seems that putting one efficient attention layer with linear or sub-linear complexity before self-attention should work similarly.
4) Some strong baselines like CoLT5[2] are missing.
[1] Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?
[2] CoLT5: Faster Long-Range Transformers with Conditional Computation
I enjoyed reading this idea, but I believe the missing experiments above, especially (1), would be highly desirable. Without a set of experiments on scaling the model up, I cannot agree that this paper is useful enough.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1) Did the authors observe any unstable training process? And again, would it become unstable when scaling up?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We have taken your comments and concerns into careful consideration and conducted additional experiments to address them. These experiments have been included in the **1-page PDF**, focusing on scaling aspects and performance in areas beyond language tasks, namely experiments that assess long-range modeling capabilities and scaling properties of BST. We would also like to draw the reviewer’s attention to the results in **Appendix C** which also provide insights into the scaling properties of our model.
---
> "The scale is too small."
> "This paper only conduct experiments up to 380M params."
> "When scaling up, many inductive bias would become useless[1]."
We acknowledge the growing significance of scalability in language models, and in response, we have incorporated additional scaling experiments with respect to the number of parameters in **Figure 1 of the 1-page PDF**, where we scale BST from 80M to 1.3B parameters. We show a 0.1% relative perplexity improvement at 80M parameters, growing to a 3.8% relative perplexity improvement at 1.3B parameters, over an equivalently large Block-Recurrent Transformer (BRT). These experiments demonstrate favorable scaling properties of our proposed model on perplexity. However, in Table 1 of the paper, all models have the same number of trainable parameters for a fair comparison.
Ablation studies in **Appendix C** also suggest favorable scaling properties of our model. We provide experiments showing that performance uniformly improves as we add more contextualizing layers, i.e., BST layers, into the architecture. Moreover, an equally significant scaling axis is the length of the sequence, and our model directly addresses and contributes to this aspect, especially at inference time. See **Appendix B** in the supplementary material on evaluating length generalization capabilities.
---
> "what about the results on Long Range Arena?"
> "Any case study about how this model captures long-range dependency? Why this model is indeed better?"
We thank the reviewer for suggesting experiments on Long-Range Arena (LRA), the results of which can be found in **Table 2 of the 1-page PDF**. Because the model’s ability to model long-range dependencies is not the only factor that influences perplexity, this experiment was necessary to directly demonstrate that our model captures long-range dependencies better than the baseline (BRT). Further, a fair comparison with Mega-chunk shows that we surpass the latest and strongest “chunked input” baseline on LRA on 4 out of 6 tasks and on average.
---
> "It seems that putting one efficient attention layer with linear or sub-linear complexity before self-attention should work similarly."
As seen in Table 2 of the 1-page PDF, SSMs such as S4D, S4, and S5 score 30%-35% points higher on Long-Range Arena (LRA) compared to Transformers with linear and sublinear attention (Linear Transformer, Performer, BigBird). It is therefore unlikely that we would gain any benefit from replacing the BST or SSM layers in hybrid models with linear or sub-linear attention layers.
Further, transformer memories are fundamentally limited by their context length $L$ (with $O(L^2)$ complexity), whereas SSMs (like RNNs) can encode information indefinitely in their latent states. For this reason, SSMs strongly outperform (linear) Transformers on long-range modeling tasks. Moreover, at inference time, Transformers completely lack the ability to generalize to sequence lengths unseen during training, unlike structured SSMs (see **Appendix B**).
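The constant-size recurrent state behind this point can be shown with a scalar toy SSM scan (our illustrative sketch, not BST itself; real SSM layers such as S4 use structured state matrices and parallel scans):

```python
def ssm_scan(A, B, C, u):
    """Scalar linear SSM recurrence: x_k = A*x_{k-1} + B*u_k, y_k = C*x_k.

    The state x has fixed size regardless of sequence length, so information
    from the first input can influence outputs arbitrarily far away --
    unlike attention, whose reach is bounded by its context window.
    """
    x, ys = 0.0, []
    for u_k in u:
        x = A * x + B * u_k
        ys.append(C * x)
    return ys

# With A = 1 (a pure integrator), an input at position 0 is visible at
# every later step; with |A| < 1, the memory decays geometrically.
print(ssm_scan(1.0, 1.0, 1.0, [1, 0, 0, 0]))  # [1.0, 1.0, 1.0, 1.0]
print(ssm_scan(0.5, 1.0, 1.0, [1, 0, 0]))     # [1.0, 0.5, 0.25]
```

The scan also makes clear why length generalization is natural for SSMs: the same recurrence applies unchanged to sequences of any length.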
---
> "Some strong baselines like CoLT5[2] are missing."
We were not aware of this baseline at the time of writing and submitting the paper. According to the arXiv publication date, this paper was made public on _March 17th_, which indicates that it is concurrent work that emerged during a similar timeframe as our research. This new work does not report perplexity on our targeted tasks nor LRA. We have cited CoLT5 and will nonetheless attempt to replicate and test CoLT5 on LRA and PG-19.
---
> "Did the authors observe any unstable training process? And again, would it become instable when scaling up?"
No, the model trains stably without the need for additional tricks.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional experiments. The rebuttal addressed most of my concerns well. Please do not forget to add these results to your paper. I do believe these experiments can greatly improve this paper. I have raised my score to 6. Good Luck! | Summary: This paper focuses on combining two efficient techniques for long-range modeling: state-space models (global contextualization) and block-recurrent transformers (local contextualization). In particular, the authors propose two different approaches: the first uses SSMs to output contexts for multiple heads (multi-head), and the second concatenates the last entries from the previous window to form a combined context state (multi-filter). Evaluation is performed on three language modeling datasets; the proposed model outperforms block-recurrent transformers in perplexity and is much faster when compared layer-wise.
Strengths: S1. Exploring different ways to combine SSMs and block-recurrent models to improve efficiency is a compelling direction. SSMs offer a parallelizable way to capture long-term information and avoid sequential computation in block-recurrent models. The results of this study should be of interest to researchers that study architectures that capture local and global information.
S2. Although the evaluation focuses mainly on comparisons with block-recurrent models and SSMs on three language modeling tasks, it is thoroughly described and well-executed.
Weaknesses: W1. Even though the paper is mainly empirically driven, the delivery lacks a comprehensive and diverse set of evaluations to demonstrate the effectiveness and limitations of the method.
W2. Experiments target language modeling on three tasks, but there is no experiment that measures the long-range capabilities of the model. There are several long-context classification benchmarks that the authors can use in addition to language modeling: LRA [1], MuLD [2], and CAB [3].
W3. The method design makes specific assumptions about the hardware to be employed and bases its evaluation on it; e.g., efficiency comparisons are made per layer. It is not explored to what extent the benefit remains when comparing training time/speed vs. quality for the whole model, with evaluation performed on typical accelerators. That reduces the practical impact in my view.
W4. A study regarding the behavior of the model when increasing the model size is missing. Scaling aspects are important to consider when making claims about outperforming transformers.
[1] https://arxiv.org/pdf/2011.04006.pdf
[2] https://arxiv.org/pdf/2202.07362.pdf
[3] https://arxiv.org/pdf/2210.07661.pdf
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Q1: Evaluating methods that model long context typically involves more tasks than language modeling. What is the level of confidence that the proposed models perform on other tasks such as long-context classification and text generation?
Q2: It would be interesting to measure the time it takes for every method to converge even within a fixed time budget. Have all methods converged in the fixed training budget in Table 1 and do they have any differences worth discussing?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Discussion about the limitations of the proposed method would be useful, I'd suggest talking about scaling behavior, performance on conventional accelerators, and generalizability to long-context tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. In response to your comments and concerns, we have conducted additional experiments, which we have included in the **1-page PDF**. These experiments address the points you have raised and also explore areas that you alluded to in your review, namely scaling aspects and performance on Long-Range Arena (LRA), outside the language modeling domain.
---
#### Re W1 and W2: Extending to areas other than language/perplexity
As we acknowledged in the limitations section (Appendix D) of our submission, we recognize the significance of conducting experiments to assess the long-range capabilities of our proposed model. These new experiments are designed to test if the improved performance on language modeling can be attributed to our model’s ability to infer long-range dependencies. To test this hypothesis rigorously, we have performed comparisons between variants of our proposed model, BST, and its recurrent predecessor, BRT, as well as several other baselines that utilize SSMs. We use the Long-Range Arena (LRA) benchmark as the primary testbed. Kindly refer to **Table 1 of the 1-page PDF**. The results demonstrate that, in line with our expectations, SSMs enhance long-range modeling capabilities compared to BRT.
---
#### Re W3: Efficiency of our model
In the interest of convenience while maintaining an exploratory approach and ensuring a fair comparison to former baselines on the dataset, we intentionally chose not to utilize model parallelism, despite the fact that our model is well-suited for it. Unlike BRT, whose recurrent layers cannot be parallelized, our model's per-layer speedups (Figure 3) will trivially translate to full-model speedups provided that there are sufficient compute resources to support parallelism. **Appendix C, Table 3** shows that replacing Transformer layers with BST layers monotonically improves perplexity. This evidence, combined with the per-layer speedup results, demonstrates the performance and speedup potential of our full model at scale.
---
#### Re W4: Scaling capabilities
Please see the additional scaling experiments in the **1-Page PDF, Figure 1**. The scaling experiments relate parameter count (80M to 1.3B) to perplexity. Relative perplexity improvements over an equivalently sized Block-Recurrent Transformer (BRT) model grow from 0.1% at 80M parameters to 3.8% at 1.3B parameters. Additionally, ablation studies in **Appendix C, Tables 2-4** relate the placement, capacity, and number of BST layers to performance.
---
#### Re Q1: LRA
> "What is the level of confidence that the proposed models perform on other tasks such as long-context classification and text generation?"
We have benchmarked BST and BRT on Long-Range Arena in **Table 2 of the 1-page PDF**. Results show that BST is indeed better at capturing long-range relations, which aligns with previous results (e.g. S4 vs Transformers) and our initial motivating intuition.
---
#### Re Q2: Training/Convergence Time
> "Have all methods converged in the fixed training budget in Table 1 and do they have any differences worth discussing?"
This is an interesting question. Our experiments show that both BRT and BST still had room to improve: validation-set perplexity continued to decrease on all datasets. While our experimental setups were designed to align with prior works (GSS & BRT), this is a reasonable question and we are well positioned to include these additional experiments in the camera-ready version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. The replies and additional experiments address my main concerns. To reflect this, I increased my score. | Summary: State space models (SSMs) perform well on modeling long-range dependencies with good efficiency scaling, but on language modeling, transformers still outperforms SSMs. This paper tries to combine the best of both worlds and proposes a hybrid model, Block-State Transformer, which combine SSMs’ capacity on long range modeling and Transformer’s ability on modeling local context. The input sequence is split to multiple smaller segments. For each segment, transformer layer will do a self attention on this token embeddings and cross attention to the output of SSMs. Their experiments show that on language modeling, their approach achieve reasonable speedup with comparable performance to Transformers.
Strengths: 1. The proposed combination of SSMs and transformers allows the model to exploit the advantages of two powerful methods while avoiding their drawbacks.
2. The SSMs used in the proposed method can be swapped for different SSMs, making it possible to benefit from advances in the SSM field.
3. The proposed method gives similar performance on language modeling compared to Transformers.
Weaknesses: 1. There was already existing work on SSMs that achieve similar performance on language modeling (https://arxiv.org/abs/2212.14052) compared to Transformers. The authors should include a discussion and comparison to the relevant work.
2. The evaluations are performed on language modeling for 4096 length sequences. On this setting, there are a lot of strong transformer baselines with efficient self-attention designs. It would be good if the authors can provide an empirical comparison with these baselines.
3. Perplexity is only one indicator of language model performance. To get a more precise understanding of performance, it would be good to include a comparison on downstream tasks.
4. Code is not available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main concerns are listed above. There are two more questions:
1. On line 235, the 6,966,499 English language words seems to be a typo according to dataset statistics on https://github.com/deepmind/pg19.
2. What are the latency of one forward step and what about memory consumption?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your generally positive review. We have taken your valuable feedback into account to improve our current version of the paper. A more comprehensive review of related works, including H3, will be provided. Additionally, we have included a number of experiments in the **1-page PDF**, to further assess long-range modeling capabilities (LRA) and scaling properties of our model. Moreover, we have expanded our baselines on language to also include Hyena, a more recent and general framework that subsumes H3 and GSS.
Our additional experiments show:
- Large improvements on PG-19 compared to Hyena and Hybrid H3. Specifically, integrating parameterized convolutions and attention in `BST:SH:unstruct` allows our model to **surpass Hyena by 1.4 perplexity points**. See **Table 1**.
- ***+30%-35% point improvement on Long-Range Arena (LRA)*** over recent and strong Transformer baselines. See **Table 2**.
___
> W1. "There was already existing work on SSMs that achieve similar performance on language modeling (https://arxiv.org/abs/2212.14052) compared to Transformers."
We have added Hybrid-H3 and Hyena as additional baselines in the **Table 1 of the 1-Page PDF**.
Hyena Hierarchy (https://arxiv.org/pdf/2302.10866.pdf) is a language model that draws inspiration from GSS and H3, and captures both of these methods within a more generic framework. In all these approaches, attention and retrieval are conceptually simulated by element-wise multiplication of a sequence of tokens with its contextualized (via an SSM or parameterized convolutional kernels) counterpart. Hyena outperforms both GSS and H3 on tasks such as associative recall and in language modeling, particularly on the WikiText103 dataset.
We have trained both BST and BRT models under a fixed parameter setting using the same tokenizer (GPT2) and vocab size, for a fair comparison. BST achieves SoTA performance on this dataset. Results can be found in **Table 1 of the 1-Page PDF**. We hope this addresses your concern regarding benchmarking SSM-inspired language models.
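As a rough illustration of the shared mechanism described above, the following sketch gates a token sequence element-wise by its contextualized counterpart; the causal depth-wise convolution here is merely a stand-in, for illustration, for the SSM or Hyena's long parameterized filters.

```python
import numpy as np

def contextualize(u, kernel):
    """Causal depth-wise convolution, standing in for an SSM / long Hyena filter."""
    out = np.zeros_like(u)
    for t in range(len(u)):
        for k in range(min(t + 1, len(kernel))):
            out[t] += kernel[k] * u[t - k]
    return out

def gated_block(u, kernel):
    """Element-wise gating of tokens by their contextualized counterpart,
    the pattern shared (in heavily simplified form) by GSS, H3 and Hyena."""
    return u * contextualize(u, kernel)

u = np.random.default_rng(1).standard_normal((8, 3))     # (L, D) toy tokens
y = gated_block(u, kernel=np.array([0.5, 0.3, 0.2]))
assert y.shape == u.shape                                # causal, shape-preserving
```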
---
> W2. "The evaluations are performed on language modeling for 4096 length sequences. On this setting, there are a lot of strong transformer baselines with efficient self-attention designs."
To the best of our knowledge, GSS and BRT were the state-of-the-art on the datasets we use in our work, outperforming other efficient Transformers, e.g., linear-attention variants. More importantly, other Transformer-based architectures do not generalize to sequence lengths not seen during training, whereas our method does (see **Figure 4 in Appendix B**). Even with relative positional embeddings, Transformers cannot reliably go beyond 3x the trained sequence length (see https://arxiv.org/abs/2305.19466), let alone 16x (at a sequence length of 65K, for example). That being said, if the reviewer can point us to specific works, we are well positioned to include additional baselines in the camera-ready version of the paper. Further, as seen in **Table 2 of the 1-Page PDF**, our method greatly outperforms very recent and efficient Transformers such as Linear Transformer, Reformer, Performer and BigBird.
---
> W3. "Perplexity is only one indicator of language model performance."
Although there is no unanimous consensus, most practitioners in the field generally agree that the performance on downstream tasks seems to be well correlated with perplexity for LLMs (https://arxiv.org/pdf/2210.14199.pdf, https://aclanthology.org/2021.emnlp-main.478.pdf). Like other works in this field, we focus on developing decoder-only models that achieve lower perplexity. Nevertheless, we acknowledge that this is an important step in developing large language models and believe BST can also serve as a powerful encoder model which may be evaluated on downstream tasks. Because BST uses off-the-shelf transformer models augmented with SSM states as context, our approach can be easily adapted into existing LLM codebases. While no one has evaluated SSM or hybrid-SSM pretraining performance on downstream tasks yet, we look forward to doing that with BST in a follow-up project.
---
> W4. "Code is not available."
As part of our ongoing work, we are working closely with the maintainers of the Block-Recurrent Transformer codebase to integrate our implementation of BST into the repository (https://github.com/google-research/meliad). In the meantime, we have provided the JAX pseudo-code for all of our variants in **Appendix E** in the supplementary materials.
---
> Q1. "On line 235, the 6,966,499 English language words seems to be a typo..."
Thank you for pointing this out. We have fixed this error in the latest version.
---
> Q2. "What are the latency of one forward step and what about memory consumption?"
The computational and space complexity of a BST layer consists of that of the transformer blocks and the SSM sublayer. We discuss this topic in more detail in the Efficiency section of the paper. In the left-side plot of Figure 3 in the paper, the y-axis represents the latency for a forward pass of our model when executed on an NVIDIA V100 GPU.
We assume that your question refers to the auto-regressive "forward step" at test time. The exact forward-step latency may vary slightly depending on the BST variant used. In the Single-Head and Multi-Head variants, every token generation step is immediately followed by feeding that token back into the SSM and adding it to the context in $\mathcal{O}(1)$ operations, via the RNN view of SSMs. On the other hand, when the Multi-Filter variant is used, contextualization can be postponed until all the tokens of the current block have been generated. The new block of tokens is then added to the context in one go, using the recurrent view of SSMs in $\mathcal{O}(W)$ operations. Token generation within the Transformer is similar to BRT, $\mathcal{O}(W)$, since every token attends to the $W$ previous tokens.
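A schematic of the Single-Head generation step described above; the shapes, the window size $W = 8$, and all helper names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

W, D, N = 8, 4, 16                    # toy window, embedding and state sizes

def ssm_step(state, token, A, B):
    """Constant-cost context update per token, via the RNN view of the SSM."""
    return A @ state + B @ token

def generate_step(state, window, token, A, B):
    """One auto-regressive step: fold the new token into the SSM state, then
    keep it in the local window of the last W tokens.  (Self-attention over
    `window` and cross-attention to `state` would happen here, inside the
    Transformer sub-layer.)"""
    state = ssm_step(state, token, A, B)          # O(1) in sequence length
    window = np.vstack([window, token])[-W:]
    return state, window

rng = np.random.default_rng(2)
A, B = 0.95 * np.eye(N), rng.standard_normal((N, D))
state, window = np.zeros(N), np.zeros((0, D))
for _ in range(20):                               # generate 20 toy tokens
    state, window = generate_step(state, window, rng.standard_normal(D), A, B)
assert state.shape == (N,) and window.shape == (W, D)
```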
---
Rebuttal Comment 1.1:
Comment: Thank you for the efforts on rebuttal. The response addressed my concerns. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you for reading our response
Comment: We really appreciate your reply.
As you mentioned, the response addresses your concerns and this includes:
- Adding more explanations on prior work such as H3, and Hyena.
- Showing that BST outperforms other efficient transformers on Long Range Arena (see *Table 2 of the 1-Page PDF*).
- Detailing memory consumption at inference in $\mathcal{O}(1)$ for single head BST.
We have added additional experiments as well that we hope will make our submission even stronger.
If you feel that your concerns have been adequately addressed, would you consider increasing your score? Otherwise, is there any specific feedback that we should discuss to further improve our submission?
Strengths: Combining state space models and Transformers is an interesting idea worth exploring. The presentation of the paper is clear. The explanation of state space models, which can be quite complex, is very clear. The evaluation hits the right notes in terms of the major questions to ask.
Weaknesses: The evaluation is missing many recent works combining state space models and attention in various ways. The claim that SSMs do not match Transformers on language has not been true for a while. Most of these methods were released significantly before the NeurIPS deadline and are critical to compare against for evaluation.
* Mega [1] combines attention and state space models.
* BiGS [2] is a new SSM-based architecture that matches Transformers in language.
* H3 [3] combines SSMs and attention in alternate layers.
* Hyena [4] removes attention completely and replaces it with a convolution-based layer (similar to an SSM).
Confusingly, many of these works are cited in the paper - and ideas from the papers are used extensively in the methods proposed (e.g., "BST:{SH,MF}" uses the structure from H3 and Hyena without comparing against those architectures as baselines). Using the ideas from these papers without comparing against them makes it difficult to understand how this method compares against previous methods and where the performance improvement comes from.
Performance is also hard to evaluate compared to standard models such as Transformers (GPT-Neo [5], Pythia suite [6]). TransformerXL is an older model that is not trained as well as modern Transformer-based LLMs.
[1] https://arxiv.org/abs/2209.10655
[2] https://arxiv.org/abs/2212.10544
[3] https://arxiv.org/abs/2212.14052
[4] https://arxiv.org/abs/2302.10866
[5] https://github.com/EleutherAI/gpt-neo
[6] https://github.com/EleutherAI/pythia
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How does BST compare to the architectures listed in the weaknesses section? It is important to compare against the original architectures that inspired components of BST, as well as modern standard Transformers.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper would be stronger with more discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We hope that we have addressed most of your concerns with the additional experiments and comparisons in the **1-page PDF**.
Our new experiments show:
1. Large improvements on PG-19 compared to Hyena and Hybrid-H3. Specifically, integrating Hyena and attention in `BST:SH:unstruct` allows our model to surpass standalone Hyena ***by ~1.4 perplexity points***. See **Table 1 in 1-page PDF**.
2. We show that we surpass all Transformer variants that you have mentioned and other methods that chunk inputs (allowing similar speed-ups to our BST) such as Mega-chunk. Specifically, on Long-Range Arena (LRA), `BST:SH:S4` performs better than:
- S4 on 5 out of 6 tasks and on average by 0.9% points
- Mega-chunk on 4 out of 6 tasks and on average by 1.3% points
Furthermore, we respond to your specific questions and comments below.
---
> “Mega [1] combines attention and state space models.”
Thanks for bringing Mega to our attention, we will review and cite it in our related works section.
We compare our model to Mega on Long-Range Arena (LRA), where we see that our model achieves comparable performance to Mega/Mega-chunk. However, on language modeling, according to Table 1 in [1], **Mega** (252M) achieves _18.07_ perplexity on WikiText103, which is roughly the same performance as the _18.50_ perplexity achieved by **Hybrid-H3** [3] (125M, half the size of Mega). ***`BST:SH:unstruct` outperforms Hybrid-H3 by 3.0 perplexity points*** on the PG-19 long-text language modeling benchmark, see **Table 1 in the 1-page PDF**. Therefore, although we were unable to directly compare our model against Mega on language modeling during the rebuttal phase, we have reason to believe that our model will outperform Mega when using the same number of parameters.
---
> "BiGS [2] is a new SSM-based architecture that matches Transformers in language."
Regarding BiGS, we consider this work to be less relevant compared to the other papers mentioned. The reason is that BiGS is designed as a bidirectional model, making it more suitable for serving as an encoder or for natural language understanding tasks, rather than auto-regressive language modeling or language generation tasks. In contrast, our focus is on decoder-only language models, which have different requirements and objectives. Further, BiGS is not evaluated on Long-Range Arena which makes it difficult to compare against.
---
> "H3 [3] combines SSMs and attention in alternate layers."
> "Hyena [4] removes attention completely and replaces it with a convolution-based layer (similar to an SSM)."
We have compared BST to H3-Hybrid and Hyena on PG-19; results can be found in **Table 1 of the 1-page PDF**. We have trained both BST and BRT models under a fixed parameter setting using the same tokenizer (GPT2) and vocab size, for a fair comparison. BST remains state-of-the-art at this scale. Specifically, integrating Hyena and attention in `BST:SH:unstruct` allows our model to surpass standalone Hyena by ~2 perplexity points.
---
> "Confusingly, many of these works are cited in the paper - and ideas from the papers are used extensively in the methods proposed (e.g., "BST:{SH,MF}" uses the structure from H3 and Hyena without comparing against those architectures as baselines)."
Please see **Table 1 in 1-page PDF** for a direct comparison against H3 and Hyena where our model outperforms both under a fixed parameter count setting.
---
> "Performance is also hard to evaluate compared to standard models such as Transformers (GPT-Neo [5], Pythia suite [6])."
As we opted to implement our models using JAX, we were unable to utilize other open-source PyTorch-based codebases like GPT-Neo and Pythia by the rebuttal deadline. However, we should have such comparisons before the camera-ready deadline. It is crucial to mention that any enhancements made to the attention layer are independent/orthogonal to our model which focuses on enabling off-the-shelf transformer-attention layers to model long-range dependencies.
Furthermore, we want to note that since GPT-Neo and Pythia employ pipeline parallelism, we anticipate similar speed improvements when applying SSMs, since they can be parallelized in a similar way. As part of our ongoing work, we are actively developing a PyTorch implementation for BST and BRT. These implementations will enable us to reliably compare performance with other PyTorch models and explore their potential benefits in conjunction with our proposed method. Please note that we do evaluate performance against Linear Transformer, Reformer, Performer and BigBird in **Table 2 of the 1-page PDF**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the extensive rebuttal and extra experiments. I will be raising my score to a 5.
For the experiments, one way they can be improved: Hyena and H3-Hybrid were trained on the Pile as their primary evaluation, and report 5B, 10B, and 15B-token experiments. It would be helpful to compare against those to ensure a fair comparison.
---
Reply to Comment 1.1.1:
Comment: Thank you for responding to our rebuttal and increasing your score.
Hyena and H3 indeed report different results on the Pile which are not directly comparable, since the experimental set-up differs between the two papers. For example, Hyena experiments use up to 15B tokens while H3 experiments use 400B tokens. Therefore, we cannot directly compare the Hybrid-H3 and Hyena results on the Pile from their papers.
However, Hyena and Hybrid-H3 experiments on PG-19 are equivalent and we have demonstrated superior BST performance (see Table 1 in 1-page PDF). Further, with Wikitext103, we can also fairly compare Hyena and Hybrid-H3. From Table 4.3 of the Hyena paper (https://arxiv.org/abs/2302.10866), we see that the perplexity of Transformer and Hyena are on-par and that Hybrid-H3 improves over the Transformer baseline by only 0.5%.
However, within our experiments (see Table 1 of our paper), we find generally that: BRT improves over the Transformer baseline by 2.1% (average over PG-19, GitHub and arXiv).
We find the performance improvement to be much larger between BST and the Transformer baseline compared to either Hyena and Hybrid-H3.
Does this answer your question, and resolve your remaining concerns? | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their insightful comments. We believe that we have addressed the vast majority of reviewers' concerns by conducting additional experiments, found in the **1-page PDF attached to this message**, and by responding to reviewers' individual questions. Our additional experiments include:
#### **Scaling Properties of Block-State Transformer (BST)** [_Figure 1_]
- Showing the competitive scaling properties of Block-State Transformer (BST), going from a 0.1% relative perplexity improvement at 80M parameters to a ***3.8% relative perplexity improvement at 1.3B parameters*** over an equivalently large Block-Recurrent Transformer (BRT).
#### **Comparisons to Hybrid-H3 and Hyena** [_Table 1_]
- Demonstrating that our model achieves ***superior perplexity*** compared to Hyena and Hybrid-H3, by 1.4 and 3.0 points respectively, under standard evaluation conditions (fixed parameter count setting).
#### **Comparisons to Transformer variants and Mega on Long-Range Arena (LRA)** [_Table 2_]
- Achieving a substantial lead over the Block-Recurrent Transformer (BRT). BST also outperforms Transformer variants **by 30%-35% points on average**. In a fair comparison with other methods that chunk input sequences, BST outperforms Mega-chunk **on 4 out of 6 tasks** and by 1.3% points on average.
Our strong results on Long-Range Arena (LRA) demonstrate that BST significantly outperforms BRT on LRA, providing further evidence that the gains on language modeling tasks can indeed be attributed to our model’s ability to capture long-range dependencies more effectively while improving computational efficiency.
---
We have included the responses to the reviewers in their rebuttal sections individually. We would like to thank all reviewers again for their time and attention.
---
Find the PDF containing additional results below.
Pdf: /pdf/f7a188116d01aaffb01009695bdf5db53fef7db0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present a novel architectural framework called the Block-State Transformer (BST), which integrates state space models and Block-Recurrent Transformers to create a competitive autoregressive language model capable of effectively processing lengthy sequences. The input sequence is passed through a state space model such as S4, whose output is later used in the Block-Recurrent Transformer as a replacement for the recurrent state vectors. To obtain the final output, the input embeddings are divided into fixed-size windows and processed in parallel by a series of Block-Recurrent Transformers. Because the S4 output replaces the recurrent state vectors within the Block-Recurrent Transformers, the absence of recurrence allows for parallel computation. The authors propose three distinct integration modes, which differ in how the S4 output is integrated into the recurrent state vectors. To evaluate the performance of BST, the authors compare it against four baseline models: Transformer-XL, Slide, Block-Recurrent Transformer, and a hybrid Gated State Space model. The comparison is conducted across three diverse datasets, namely PG19, arXiv Math, and GitHub. BST demonstrates slight perplexity improvements on the PG19 and GitHub datasets. Additionally, the authors present ablation studies on various parameters, including SSM layer placement, the number of SSM layers, and the state dimensionality of the SSM.
Strengths: + **Strong Presented Results**: The authors present results on competitive benchmarks against reasonable prior baselines such as Transformer-XL, Slide, and Block-Recurrent Transformers, and outperform them across several tasks.
+ **Computational efficiency**: The proposed method is able to provide a huge improvement in terms of computational efficiency over models like Block-Recurrent Transformers by parallelization.
+ **Interesting combination of prior ideas**: By using the parallelizable nature of SSMs, the authors were able to introduce parallelization to Block-Recurrent Transformers, thereby achieving computational efficiency.
Weaknesses: - **Incomplete Related works**: The authors' treatment of related works, particularly in the context of models combining Transformers and S4, appears to be lacking. It would have been beneficial for the authors to provide a more comprehensive discussion of existing models that incorporate both Transformers and S4. This would have allowed for a deeper exploration of the advancements and limitations of these models, highlighting the unique contributions of the proposed Block-State Transformer (BST). In addition, the related works section could include more details on SSM development, since S4, S5, and S4D are mentioned later in the paper.
- **Missing Preliminaries on S4/S5**: A more detailed description of the S4 models within the method section would have been beneficial, particularly regarding the computation of the kernel, since the complexity of the S4 model is the major part of the BST model's complexity. Specifically, the computation of the kernel is not trivial if one wants to keep the overall $L \log(L)$ complexity, and it relies on the form of the A and B matrices. A more comprehensive exposition of the computational aspects of the S4 models is deemed necessary for a thorough understanding of the subject matter.
- **Additional benchmarking** : As the authors themselves admit in the limitations section, there are further results required, especially on well benchmarked domains such as the Long Range Arena and also other long-term datasets to provide convincing evidence of BST performance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Broadly, understanding BST capabilities in other settings would help enrich the paper results. See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable and constructive comments. We agree that Block-State Transformer is a novel architecture that shows strong results and computational efficiency. We think that we have only scratched the surface of possibilities with this interesting combination of ideas. We have conducted more experiments that can be found in the **1-page PDF** (see general comments to access PDF), some of which are related to your review, namely benchmarking BRT and BST on Long-Range Arena. Additional experiments also include scaling results and more baselines on language tasks. We believe that these supplementary results address most of your feedback on proving that we can harness the strengths of S4 and Transformer in long-range classification and language modeling tasks.
___
#### Extending Related works
We have indeed included a more comprehensive review of related work in the latest version of our paper, while also emphasizing the key differences. Our discussion will include S4 kernel computation, GSS-Hybrid, Hybrid-H3, and Mega. We have already outlined some of the core contributions in [response to Reviewer 9Xt3](https://openreview.net/forum?id=XRTxIBs2eu&noteId=wjrwjedtwd), and we invite you to review them for further insights.
---
#### Covering Preliminaries
We have incorporated your feedback by expanding the discussion in the efficiency section in the latest version of the paper. To summarize, the efficiency of our models is largely afforded by standard SSM implementations, which are due to:
* By imposing a Diagonal Plus Low-Rank (DPLR) structure on the $A$, $B$, and $C$ matrices, rematerializing the convolutional kernel can be carried out in tandem with the convolution operation with $\mathcal{O}(L \log L)$ complexity, as described in S4.
* By replacing RNNs (BRT) with SSMs (BST), BST transformer blocks can run in parallel instead of sequentially as in BRT.
Further, by demonstrating two vastly different SSMs as means of contextualization, we show that our proposed architecture/framework remains agnostic to the specific type of SSM used, making the advancement of SSMs orthogonal to our work. That being said, and as highlighted in your review, since the computational efficiency of BST layers _can be_ dictated by the SSM, it is worth discussing them in more detail.
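To make the $\mathcal{O}(L \log L)$ claim concrete, here is a minimal sketch of generic FFT-based causal convolution, the operation in which the rematerialized SSM kernel is used; this illustrates the standard technique only, not the actual S4/BST implementation:

```python
import numpy as np

def fft_causal_conv(kernel, u):
    """Causal convolution of input u with a length-L kernel in O(L log L).

    Both signals are zero-padded to length 2L so the circular convolution
    computed by the FFT matches the linear (causal) convolution.
    """
    L = len(u)
    n = 2 * L
    y = np.fft.irfft(np.fft.rfft(kernel, n) * np.fft.rfft(u, n), n)
    return y[:L]

# Sanity check against the naive O(L^2) direct convolution.
rng = np.random.default_rng(0)
L = 64
kernel, u = rng.standard_normal(L), rng.standard_normal(L)
assert np.allclose(fft_causal_conv(kernel, u), np.convolve(kernel, u)[:L])
```

The zero-padding to $2L$ is what prevents the wrap-around of circular convolution, keeping the result identical to the sequential recurrence unrolled over the window.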
---
#### Additional benchmarking
Please see **Table 2 in the 1-page PDF** for additional experiments on Long-Range Arena (LRA).
Our motivating hypothesis was that replacing recurrence with a more efficient yet powerful component for capturing long-range dependencies would improve performance on language modeling tasks. As we openly acknowledged in the limitations section, we recognized the need for more concrete evidence to support this hypothesis. Consequently, in response to your request, we conducted additional experiments on Long-Range Arena (LRA) (please see **Table 2 of 1-page PDF**), demonstrating that BST indeed surpasses BRT (and other methods that chunk inputs such as Mega-chunk) in capturing long-range dependencies.
---
Thank you for reading our response! | null | null | null | null | null | null |
Mirror Diffusion Models for Constrained and Watermarked Generation | Accept (poster) | Summary: This paper proposes a new class of diffusion models called the Mirror Diffusion Model (MDM), which confines the generation to a constrained convex set. The MDM transforms the generation from a constrained original space to an unconstrained dual space. With this transformation, MDM can be trained and sampled like the unconstrained Euclidean-space diffusion models, such as DDPM, with an additional step mapping back to the original space. In experiments, the authors show that the general quality of MDM on constrained sets outperforms the reflected diffusion models in terms of quality and efficiency. Furthermore, the authors demonstrate an important application of MDM in watermark generation.
Strengths: (S1) This paper proposes a new class of diffusion models that confine the generation to a constrained convex set. It achieves this by transforming the generation from the constrained original space to an unconstrained dual space, enabling efficient training and sampling similar to unconstrained diffusion models.
(S2) The mechanism relies on a strongly convex function defined on the convex set. This function's gradient space spans $R^d$, and the gradient norm approaches infinity near the boundary of the convex set. The author provides clear instructions on how to design such a convex function for various shapes, including the $\ell_2$-ball, simplex, and general polytopes.
(S3) The experiments demonstrate that the proposed model outperforms reflected diffusion models in terms of generation quality and efficiency. It also achieves generation quality comparable to that of unconstrained DDPM without violating any constraints.
Weaknesses: (W1) From my understanding, the proposed model requires the design of a strongly convex function whose gradient maps the constrained convex set to the entire $R^d$ space, enabling the application of an unconstrained diffusion model in the dual space. However, it is not immediately apparent to me how the authors ensure that this condition holds for the strongly convex functions they provide.
(W2) It would indeed be beneficial to observe experiments conducted on more realistic datasets, such as CIFAR-100 and ImageNet.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: (Q1) Can the authors provide the proofs that gradient of the strongly convex function maps the constrained convex set to the whole $R^d$?
(Q2) Can the authors provide more experimental results on more realistic data, like CIFAR-100 and ImageNet?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Conditions when $\nabla\phi(\mathcal{M})= \mathbb{R}^d$**
- We first note that the gradient map of a strictly convex function $\phi$ need *not* span $\mathbb{R}^d$, unless additional conditions are satisfied. For mirror maps, we follow the literature (e.g., [1,2]) and require $\phi$ to be additionally *(i)* of Legendre type [1] (i.e., $\lim_{x\rightarrow\partial\mathcal{M}}\|\nabla\phi(x)\|\rightarrow \infty$, see L118 Sec 3.1) and *(ii)* continuously differentiable (see L117 Sec 3.1). When $\phi$ also satisfies these two conditions, its gradient map will be surjective with range $\mathbb{R}^d$.
- The above surjectivity statement follows from convex analysis (e.g., [1]). Here is how we understood it intuitively: consider the 1D case. Since $\phi$ is strictly convex, its derivative is strictly increasing (see, e.g., [1,3]). Then, the above two conditions ensure that the derivative not only approaches $(-\infty,\infty)$ at the boundary $\partial\mathcal{M}$, but is also continuous in its domain (i.e., there exists no hole or jump). Hence, the gradient map has range $\mathbb{R}$.
- It is, however, a valid and in fact critical question whether such requirements can be satisfied for any convex constraint set (i.e., whether the mirror map approach works). Unfortunately, to the best of our knowledge, how to explicitly construct a mirror map (with both $\nabla \phi$ and $(\nabla \phi)^{-1}$ analytically given) for an arbitrary convex set is still an open problem, although existence is less of an issue.
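As a concrete 1D illustration of the surjectivity argument (our own toy example, not from the paper): take $\mathcal{M}=(0,1)$ with the negative-entropy mirror map $\phi(x)=x\log x+(1-x)\log(1-x)$, whose derivative (the logit) is a strictly increasing, continuous bijection from $(0,1)$ onto all of $\mathbb{R}$, with the sigmoid as its analytic inverse:

```python
import math

# phi(x) = x log x + (1-x) log(1-x) is strictly convex and of Legendre type
# on (0, 1): its derivative (the logit) blows up at the boundary and is
# continuous, so it is a bijection from (0, 1) onto R.
grad_phi = lambda x: math.log(x / (1.0 - x))          # nabla phi: (0,1) -> R
grad_phi_inv = lambda y: 1.0 / (1.0 + math.exp(-y))   # (nabla phi)^{-1}: R -> (0,1)

# The derivative diverges near the boundary (Legendre condition)...
assert grad_phi(1e-12) < -20 and grad_phi(1 - 1e-12) > 20
# ...and the round trip through the dual space recovers the primal point.
for x in (0.01, 0.5, 0.99):
    assert abs(grad_phi_inv(grad_phi(x)) - x) < 1e-9
```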
---
**2. Large-scale experiments on watermarking ImageNet 256x256**
- We appreciate the reviewer's comment. In the **PDF attached above in our response to all reviewers**, we present additional results of our MDMs on large-scale image datasets, specifically ImageNet 256x256, for both conditional and unconditional watermarked generation. For conditional generation, we focus on image restoration tasks, generating clean, watermarked, images conditioned on degraded inputs, using both MDM-dual and MDM-proj. For unconditional generation, we include mainly MDM-proj due to time constraints during rebuttal. Both MDM-proj and MDM-dual consider a polytope constraint set whose parameters are chosen such that the watermark yields high precision (>95%) and low false positive rate (< 0.001%). Specifically, we set $m$=100, $b$=1.2, $c$=-1.2, and $a_i \in \mathbb{R}^{196608}$ orthogonal Gaussian random vectors. Similar to Sec 5.2, we initialize networks with pretrained checkpoints [4,5].
- Our qualitative results suggest that both MDM-dual and MDM-proj scale to high-dimensional applications and are capable of embedding invisible watermarks in high-resolution images. Note that all non-MDM-generated images, despite being indistinguishable, actually violate the polytope constraint, whereas MDM-generated images always satisfy the constraint. These results highlight the scalability of our MDM for large-scale, high-dimensional applications. We will release our codes upon publication.
---
[1] Convex analysis. (Rockafellar 1970)
[2] The Mirror Langevin Algorithm Converges with Vanishing Bias
[3] https://math.stackexchange.com/questions/999550/strictly-convex-if-and-only-if-derivative-strictly-increasing
[4] https://github.com/openai/guided-diffusion
[5] https://github.com/NVlabs/I2SB
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' comprehensive explanations. I will certainly factor these points into my deliberations during the discussion with the AC. | Summary: This paper introduces Mirror Diffusion Models (MDM) for data distributions constrained within some boundary, where the diffusion process runs not in the distribution of the (constrained) primal space, but in the distribution of the (unconstrained) dual space, for constrained datasets such as simplices, polytopes, and balls. While the primal space remains constrained, the dual space is unconstrained and the standard Gaussian diffusion process can be used without any restrictions. When adequate dual functions and mirror maps are given, this paper achieves better sampling quality when generating from a constrained set, both on small-dimensional synthetic datasets and on real-world datasets, and offers competitive results in generating watermarked images.
Strengths: * The paper has good clarity in introducing theorems, examples, and applications.
* To the best of our knowledge, this is the first work that necessitates and uses a primal-dual algorithm in the diffusion model literature. Given well-defined dual functions and mirror maps, an unconstrained diffusion model can be used for generating constrained datasets. The experiments with synthetic datasets (Dirichlet distribution) provide evidence for using MDM on categorical distributions, and the competitive results on watermarked distributions provide further direction for privacy issues in diffusion-based generative models.
Weaknesses: * The performance gap between MDM-proj and MDM-dual is not narrowed. Since U-Net-based networks for images are specialized in generating pixel-based images, the dual-space distribution may be less suited to being sampled with the same network architectures.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * How many sampling steps did you use for generating in each dataset?
=====
Correction
* One of the x-labels in Figure 6 should be `MDM-dual`. Currently both read `MDM-proj`.
* Line 514: polytop --> polytope
* Figure 8: top and bottom --> left and right
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Performance of dual-space diffusion models**
- We first thank the reviewer for raising the comment. While we do notice a gap between `MDM-proj` and `MDM-dual` (first 2 rows in Table 6), we stress that these FID values are evaluated w.r.t. the original, *constraint-violated*, training set distribution, which differs from the dual-space, *constraint-satisfied*, distribution from which `MDM-dual` was trained. An alternative, arguably more suitable, metric for evaluating `MDM-dual` is the FID w.r.t. the dual-space distribution, which we include in the last row of Table 6 (highlighted in gray). There, `MDM-dual` behaves similarly to `MDM-proj`, exhibiting a FID-Precision trade-off. Qualitatively, Fig 6,10,11 show that `MDM-dual` is able to generate watermarked images with good quality.
- Nevertheless, whether the current parametrization, using U-net, best suits learning dual-space diffusion models is an interesting question, and we believe it will depend strongly on the choice of (polytope) constraint sets, since dual-space samples essentially change the coefficient bases, defined by the constraint set, of primal-space samples (see Eq 17). For image applications, we find that MDM can best embed invisible watermarks using high frequency $a_i$, as it preserves the semantic structure of images. Co-designing parametrization with the constraint set will be an interesting future direction to pursue, and we thank the reviewer for raising the comment.
---
**2. Typo & other clarifications**
- We thank the reviewer for the meticulous reading. There’s indeed a typo in Fig 6. The right 2 columns should be `MDM-dual` rather than `MDM-proj`. All noted typos will be fixed in the revision (kindly note that we are not allowed to revise the submission at the moment).
- Regarding sampling steps, all datasets and diffusion models in Sec 5.1 generate samples with 1000 steps, as mentioned in L239. For image datasets, FFHQ and AFHQv2, in Sec 5.2, we generate watermarked images with 79 steps, following the setup from EDM [1]. We note, however, that we did not perform extensive tuning on this hyper-parameter.
---
[1] Elucidating the design space of diffusion models
---
Rebuttal Comment 1.1:
Title: Response to the official review
Comment: Thank you to the authors for the thoughtful responses. I keep the current score. | Summary: This paper studies how to learning diffusion model when the data is in a constrained domain. The idea is to map the data into an unconstrained domain using the mirror map and conduct the diffusion on that mirror space. Once the generation finishes, map the data back to the original space.
Strengths: Using the idea of mirror map seems a novel approach.
This approach also allows for likelihood computation, which is helpful for model evaluation.
Weaknesses: The method seems to be limited to several types of special domains. What's the challenge of generalizing it into more general domain constraints?
The experiment session can be improved. For example, to evaluate the model's performance on generating the data on simplex, we can consider generating the segmentation maps. Some larger scale experiment would be helpful.
Some very related literature on learning diffusion models on constrained domains are missing [1, 2] and need to be discussed.
[1] Learning Diffusion Bridges on Constrained Domains
[2] First Hitting Diffusion Models for Generating Manifold, Graph and Categorical Data
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Discussion on [1,2]**
- We first thank the reviewer for bringing up these missing references, which are indeed relevant to our Mirror Diffusion Model (MDM). Both [1,2] and our MDM generate samples in constrained domains. While [1,2] can be applied to, e.g., discrete domains [1,2], equality constraints (e.g., sphere boundary) [2], and products of 1D bounded intervals [1], our MDMs instead focus on “**constrained sets”**, i.e., a particular class of constrained domains that are specified by “**inequality constraints**”, $\mathcal{M}:=${$x\in\mathbb{R}^d: f_i(x)<0, \forall i $}. Hence, MDM stands in parallel to [1,2]. Similar to [1,2], MDM also enjoys simulation-free training and adopts regression objectives (see Table 1), making MDM superior to prior inequality-constrained diffusion models [3,4], as evidenced by Sec 5.1 (Table 3,4,5, Fig 5) and Appendix C.1 (Table 9,10,11). Finally, we note that our MDM is the first to explore inequality-constrained domains as a new mechanism for watermarked generation, which is otherwise absent in [1,2,3,4].
- As we do acknowledge that [1,2] are important references, we will include them, along with the above discussions, in the following revision (kindly note that we are not allowed to revise the submission at the moment).
---
**2. Generalization to other constrained domains**
- Following the discussion in **1.**, we re-emphasize that the goal of our MDM is to generate samples confined to “**inequality constraints**”. In principle, MDM can generate samples confined to *any* convex constraint set given its mirror map. In Sec 3.2, we exemplify three mirror maps, each for a different type of constraint set, mainly to demonstrate how efficient, closed-form mirror maps can be constructed for most inequality constraints considered in prior works [3,4]. This, as also recognized by Reviewer i5Cc, can be beneficial to a broader audience. For general convex constraint sets, MDM still remains applicable by constructing, e.g., log-barriers. While this may induce additional cost at inference, it introduces *no* computational overhead at training, which, crucially, preserves all desired computational advantages from Euclidean-space diffusion models (see Table 1).
- As mirror maps are, by construction, built from convex functions, our MDM is also subjected to their domains (see L283-284 in Sec 6). We note, however, that MDM may still be applicable to general (non-convex), yet compact, constraint sets by adopting the diffeomorphism discussed in [4] (see their Fig 2 (iv)). Constructing simulation-free diffusions (like our MDM) for more general inequality constraints is an interesting future direction worth pursuing, and we thank the reviewer for raising these comments.
---
**3. Large-scale experiments on watermarking ImageNet 256x256**
- We appreciate the reviewer's comment. In the **PDF attached above in our response to all reviewers**, we present additional results of our MDMs on large-scale image datasets, specifically ImageNet 256x256, for both conditional and unconditional watermarked generation. For conditional generation, we focus on image restoration tasks, generating clean, watermarked, images conditioned on degraded inputs, using both MDM-dual and MDM-proj. For unconditional generation, we include mainly MDM-proj due to time constraints during rebuttal. Both MDM-proj and MDM-dual consider a polytope constraint set whose parameters are chosen such that the watermark yields high precision (>95%) and low false positive rate (< 0.001%). Specifically, we set $m$=100, $b$=1.2, $c$=-1.2, and $a_i \in \mathbb{R}^{196608}$ orthogonal Gaussian random vectors. Similar to Sec 5.2, we initialize networks with pretrained checkpoints [5,6].
- Our qualitative results suggest that both MDM-dual and MDM-proj scale to high-dimensional applications and are capable of embedding invisible watermarks in high-resolution images. Note that all non-MDM-generated images, despite being indistinguishable, actually violate the polytope constraint, whereas MDM-generated images always satisfy the constraint. These results highlight the scalability of our MDM for large-scale, high-dimensional applications. We will release our codes upon publication.
---
[3] Diffusion Models for Constrained Domains
[4] Reflected Diffusion Models
[5] https://github.com/openai/guided-diffusion
[6] https://github.com/NVlabs/I2SB
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: The rebuttal partially addresses my concerns and I thus increase my score. | Summary: The submission "Mirror Diffusion Models for Constrained and Watermarked Generation" describes a new approach to generate constrained data with diffusion models. Using mirror maps, the diffusion process proceeds as usual in an unconstrained space, but the generated data can be converted into the constraint space through the bijective mirror map.
The submission discusses (toy) applications in constrained generation onto a ball and a simplex, and an application of this approach to watermarking of diffusion model outputs.
Strengths: The proposed approach using tools from convex optimization to encode convex constraints into the generation process is simple and elegant. The submission also spends a good amount of time on exposition of examples for commonly-used mirror maps, which I think will be very beneficial to the wider readership.
The application to watermarking is a suprising, but interesting connection that the submission draws, discussing an immediate beneficial application of the proposed approach. Overall I consider this a strong submission.
Weaknesses: I think I fully understood the submission up to Section 5.2, which is quite compressed (probably due to space reasons), and I found it hard to fully understand. I'll enumerate these questions below in the questions section.
Otherwise, I see no major weaknesses. There are some typos, which I'll briefly mention:
* In standard Euclidean spaces, tractable marginal can be
* The marginal $q(y_{t-1}|y_t, y_0)$ hints the optimal reverse
Appendix:
* Boarder impact
* polytop
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: * Just for my understanding: For MDM-proj, the output of a given, pretrained diffusion model is projected onto the constraint set after generation? And for MDM-dual, a new model is trained/fine-tuned on constraint-projected samples?
* The polytope constraint for the watermark is hard for me to understand intuitively, what is being constrained here? The pixel space of the generated image is constrained to fulfil a random projection into a random interval? Intuitively, it is not clear to me why the impact of this constraint on the generated images is not larger?
* Related to question 1, does this mean that only MDM-proj can use multiple tokens for multiple users, and MDM-dual needs to retrain a new model for every new token?
* Why is the precision of the watermark less than 100%? From my understanding of the preceding sections, a constraint violation should be excluded? Why are images being generated that violate the constraints?
Overall, I hope the authors could clarify these questions and rewrite Section 5.2 to be easier to understand.
A few more questions/comments:
* The functions phi are almost Legendre functions in the sense that they are essentially smooth and essentially strictly convex. Is the twice-differentiability needed in principle (aside from Eq.(8)?
* The reduction to a bijective map in Eq.(18) is a bit unsatisfying. I think it would be neater to derive the base function for this tanh shift
* Slightly related, Legendre functions do not seem to be strictly needed to generate the mirror maps in this work? Technically, a strictly monotone operator (as defined e.g. in "Convex Analysis and Monotone Operator Theory in Hilbert Spaces" ) would be sufficient? (This is more of a side question concerning the notation, I don't think that too much is gained by generalizing this way)
* Is it possible to derive p-values for the watermarked model to quantify the uncertainty about the watermark, as was done in Kirchenbauer 2023? From a practical perspective, watermark precision is not the primary objective; rather, what matters is precision with a low false-positive rate of the resulting detection scheme.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors sufficiently discuss broader impact in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Clarification on watermark generation in Sec 5.2**
- The reviewer’s understanding of MDM-proj and MDM-dual is correct: MDM-proj projects samples generated by pretrained diffusion models to a constraint set whose parameters (i.e., tokens) are visible only to the private user. In contrast, MDM-dual learns a dual-space diffusion model from the constrained-projected samples. Hence, as conjectured by the reviewer, MDM-proj allows multiple tokens for multiple users, whereas MDM-dual, like other MDMs in Sec 5.1, is constraint-dependent.
- We view images as samples in the vectorized image space $\mathbb{R}^d$. To watermark a, e.g., 64x64 image ($d$=3x64x64=12288), we consider a polytope constraint set, $\mathcal{M}:=${$x\in \mathbb{R}^d: c_i < a_i^\top x < b_i, \forall i \in [m]$}, where $a_i \in \mathbb{R}^d$ are orthonormal vectors in the **vectorized image space**. Hence, projecting $x$ to $\mathcal{M}$ constrains the signed distances of $x$ to these linear-independent hyperplanes (defined by $a_i$’s) to be within the intervals $(c_i, b_i)$.
- The impact of the polytope constraint on generation quality depends on the choice of {$a_i, b_i, c_i$}$_{i=1}^m$, as illustrated by the ablation study in Fig 8. For image applications, we find that watermarks can be invisibly embedded with high-frequency $a_i$’s, larger interval of $c_i$’s and $b_i$’s, and a larger $m$. This is because high-frequency perturbations often preserve the semantic structure of images. While a larger interval improves the generation quality at the cost of loosening the constraint set, the overall precision is tightened up by increasing the number of constraints, $m$.
- Similar to Kirchenbauer 2023, we reject the null hypothesis and detect the watermark if the sample produces no violation of the polytope constraint, i.e., if $x \in \mathcal{M}$. The reviewer is correct that both MDM-proj and MDM-dual generate samples that always satisfy the constraint. This readily implies 100% recall (`TP/(TP+FN)`) and 0% Type II error (`FN`), yet *not* necessarily 100% precision (`TP/(TP+FP)`) due to false-positive (`FP`) samples. Specifically, `FP` samples are those for which the null hypothesis is actually true (i.e., they are *not* generated by MDM) yet which accidentally fall into the constraint set, hence being mistakenly detected as watermarked.
(Notation: `TP`,`FP`,`TN`,`FN` respectively denote the numbers of True Positives, False Positives, True Negatives, and False Negatives.)
- In the table below, we report the precision, false-positive rate (FPR), and accuracy of our MDMs that are used to generate all image figures (Fig 1,6,10,11). We stress that on both datasets, our MDMs achieve high accuracy & precision with low FPR.
| | Precision (`TP/(TP+FP)`) | FPR (`FP/(FP+TN)`) | Accuracy (`(TP+TN)/(TP+FP+TN+FN)`) |
| --- | --- | --- | --- |
| FFHQ | 93.3% | 0.072% | 96.4% |
| AFHQv2 | 92.7% | 0.079% | 96.1% |
- We do acknowledge that Sec 5.2 could be stated clearly with additional paragraphs, yet we were limited by the space constraint at the submission time. Admitted that we are still not allowed to revise the submission, we will rewrite Sec 5.2 and include these discussions in the later revision. We thank the reviewer again for raising these comments.
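For readers wanting to see the detection mechanics concretely, the following toy sketch (our own illustration with a much smaller $d$ than the paper's 196608; `in_polytope` and `project` are hypothetical helper names, not the authors' code) shows the polytope membership test and the coefficient-clipping projection that orthonormal $a_i$'s make possible:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 512, 100                # toy dimensions; the rebuttal uses d = 196608, m = 100
b, c = 1.2, -1.2
# m orthonormal directions a_i, obtained from the QR of a Gaussian matrix.
A = np.linalg.qr(rng.standard_normal((d, m)))[0].T   # shape (m, d), orthonormal rows

def in_polytope(x):
    """Watermark detector: x is flagged iff c < a_i^T x < b for all i."""
    z = A @ x
    return bool(np.all((z > c) & (z < b)))

def project(x):
    """Project x onto the polytope by clipping its coefficients along the a_i."""
    z = A @ x
    z_clipped = np.clip(z, c + 1e-6, b - 1e-6)
    return x + A.T @ (z_clipped - z)   # valid because the rows of A are orthonormal

x = rng.standard_normal(d) * 5.0
assert not in_polytope(x)      # a generic sample almost surely violates some constraint
assert in_polytope(project(x)) # projected (watermarked) samples always satisfy it
```

This also makes the 100%-recall/imperfect-precision distinction tangible: projection guarantees membership, while an unrelated sample can still land inside $\mathcal{M}$ by chance, producing a false positive.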
---
**2. Relation of $\phi$ to Legendre function**
- We first thank the reviewer for raising this interesting comment. We based our notation (in Sec 3) on the literature of Mirror Langevin Dynamics (MLD), e.g., [1]. There $\phi$ was indeed required to be of “Legendre type [2]” and twice-differentiable. This was because the twice differentiability was used to induce a (Riemannian) metric on the mirror space, from which MLD is constructed. For our MDM, however, twice differentiability is indeed needed only for Eq 8 as the reviewer pointed out. Continuous differentiability, i.e., $C^1$, would suffice for training. We very much appreciate this comment and will clarify this detail in the revision.
- In this view, we may indeed be able to generalize our approach to strictly monotone operators. Meanwhile, one handy feature of using a convex $\phi$ is that the inverse of its Hessian (inverse in the sense of matrix inverse) is the Hessian of its dual. Algorithmically, this might not be necessary, but how it may affect the performance is not yet clear to us. Therefore, this generalization, despite being beyond the scope of our submission, will be an interesting future direction, and we also think other applications such as constrained generation in function spaces would be very interesting. Thank you for asking!
---
**3. Typo & other clarifications**
- All noted typos will be fixed in the revision. Additionally, we will rewrite Eq 18, the tanh shift, in the form of base functions. We thank the reviewer for the meticulous reading!
---
[1] The Mirror Langevin Algorithm Converges with Vanishing Bias
[2] Convex Analysis (Rockafellar 1970)
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for providing this detailed response, it would be great if these clarifications could be included in the update. I have no further questions! | Rebuttal 1:
Rebuttal: ### **Author response to all reviewers**
We thank the reviewers for their valuable comments. We are excited that the reviewers identified the novelty of using mirror maps in learning constrained diffusion models ( **Reviewers** ****i5Cc****, ****kFs5, Bc59****, ****VdTP****), acknowledged our superior empirical results over prior diffusion models [1,2] and distinct application to watermarked generation (**Reviewers** ****i5Cc****, ****Bc59****, ****VdTP****), and found the paper well-written (**Reviewers** ****kFs5****, ****Bc59****, ****VdTP****). We believe MDM takes a significant step toward a new class of tractable diffusion models for constrained sets and watermarked generation.
While *all* reviewers recognized our technical contribution, one of the main criticisms stemmed from the insufficient evaluation of our MDM on larger-scale datasets (raised by Reviewers kFs5, VdTP). In the **attached PDF (below)**, we present additional results of our MDMs on ImageNet 256x256, for both conditional and unconditional watermarked generation. Our qualitative results suggest that MDMs can be scaled to high-dimensional applications and are capable of embedding invisible watermarks in high-resolution images. While all non-MDM-generated images, despite being visually indistinguishable, violate the constraint/token, our MDM satisfies the constraint by construction. We highlight that these new results, gained uniquely in MDMs by constructing efficient mirror maps, are otherwise absent in prior diffusion methods [1,2].
We tried our best to resolve all raised questions in the individual responses below. If you have any additional questions/comments/concerns, please let us know. We always appreciate the reviewers' precious time in providing their valuable feedback.
[1] Diffusion Models for Constrained Domains
[2] Reflected Diffusion Models
---
**Please find additional figures in the PDF below**:
Pdf: /pdf/5d1c653e87c6246c71e40a96d294af94336ec708.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
FreeMask: Synthetic Images with Dense Annotations Make Stronger Segmentation Models | Accept (poster) | Summary: The authors propose to use paired synthetic data to train a semantic segmentation model. Specifically, a filtering strategy and a re-sampling strategy are proposed to control the quality of the synthetic data. In this way, the paired synthetic data can further improve a standard segmentation model.
Strengths: - The overall idea makes sense, and the implementations are reasonable.
- The paper is well written and easy to follow.
Weaknesses: - There may be conflict between filtering hard pixels and re-sampling hard masks, because the re-sampled hard samples may be filtered out. More in-depth analysis could be appended to solve this concern.
- The performance gains (e.g., 48.5 → 50.6) do not seem significant enough, considering that the proposed method introduces several heavy extra processing steps. Specifically, 1) fine-tuning the generative model on the specific dataset; 2) pre-training a segmentation model, i.e., Line 197; 3) finally training the desired segmentation model.
- The effectiveness of the proposed method depends on the gap between the generative model and the specific dataset, and thus an in-depth analysis of the adaptation (fine-tuning) of the generative model is necessary. On the one hand, the generative model is hard to adapt when the specific dataset is small. On the other hand, it is harder for the generative model to provide rich cases when the specific dataset is large. Some in-depth analyses could be appended.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See *Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See *Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful for your efforts and constructive feedback. We hope the concerns are well addressed.
**Q5-1: Conflict between two strategies: the generated additional hard samples are filtered out**
Our filtering strategy does not discard all hard samples. It mainly aims to detect noisy synthetic regions, rather than ignoring all hard samples. As described in L205-207 of our main paper, we set a tolerance margin for filtering, which enables the middle-hardness samples to be kept and mainly removes synthesis failure cases with extremely large losses, as also evidenced by the quantitative results in our response Q2-7 to Reviewer 6xi1. Therefore, the additionally generated images for hard masks are beneficial.
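To make the tolerance-margin idea concrete, here is a minimal Python sketch of this style of loss-based filtering. The class-wise reference losses, the thresholding rule, the `tolerance` value, and the ignore label are illustrative assumptions rather than the paper's exact formulation:

```python
# Hypothetical sketch of loss-based filtering of noisy synthetic pixels.
# A pixel is marked as ignored when its loss greatly exceeds a class-wise
# reference loss scaled by a tolerance margin, so middle-hardness pixels
# survive and only extreme synthesis-failure pixels are removed.
IGNORE = 255  # conventional ignore label in semantic segmentation

def filter_noisy_pixels(losses, labels, class_ref_loss, tolerance=2.0):
    """losses, labels: flat lists of per-pixel loss / class id.
    class_ref_loss: dict mapping class id -> reference (e.g. average) loss.
    Returns labels with extreme-loss pixels replaced by IGNORE."""
    filtered = []
    for loss, cls in zip(losses, labels):
        if loss > tolerance * class_ref_loss[cls]:
            filtered.append(IGNORE)   # likely a synthesis failure
        else:
            filtered.append(cls)      # kept, including middle-hardness pixels
    return filtered
```

Under this rule, a moderately hard pixel (loss somewhat above the class reference) is still retained for training; only outliers far beyond the margin are discarded.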
**Q5-2: Improvement (48.5 $\rightarrow$ 50.6) is not significant enough, especially when the method is sophisticated**
**[Our improvement is much larger than model-centric works]** Actually, the +2.1% gain over the fully-supervised performance on ADE20K is already a tremendous improvement. Across our investigated seven architectures (appendix Tab 1), we improve the fully-supervised baseline by +2.0% on average. **In comparison**, the hot research line of model-centric works, *e.g.*, SegViT [NeurIPS'22], only improves its precedent StructToken from 50.9% $\rightarrow$ 51.3% (+0.4%).
**[Our pipeline is straightforward]** Moreover, our framework is straightforward to implement and easy to deploy. We do not have to use synthetic images for pre-training and then fine-tuning. We can simply combine them with real images for joint-training. There is only a single stage of training in this case. We demonstrate the strong results under this joint-training scenario in main Tab 1 and appendix Tab 1. It is also worth noting that, once our synthetic images have been generated for a specific application scenario, they can be saved for later use and benefit various types of model architectures. We will also open-source our large-scale ADE20K and COCO synthetic training images for our community to use.
**Q5-3: How to ensure generation quality with small datasets? Will the effectiveness be diminished on large datasets?**
**In case of small datasets:**
- **[Intuitive analysis]** Owing to the highly capable pre-trained text-to-image diffusion model, the generation quality is still very promising even with limited fine-tuning images. A good example of this is DreamBooth, which can personalize a generation model with only a few (3-5) images. More importantly, even if the synthesis quality is not promising in some rare scenes, our proposed filtering strategy can safely ensure the remaining synthetic regions are relatively clean. In fact, considering that small datasets are especially hungry for training data, our delicately processed synthetic set will be precious to them and hopefully enhance the fully-supervised baseline remarkably.
- **[Quantitative results]** To validate this, we select a subset of 1K images from ADE20K. We fine-tune the Stable Diffusion model only with these limited images, and produce 10K synthetic images (already filtered and re-sampled) with this fine-tuned generator. As a result, merely with 1K real images, the validation mIoU is 22.8%. After joint training with our 10K synthetic images, the performance is tremendously boosted from 22.8% $\rightarrow$ 28.6% (**+5.8%**). This improvement indicates our framework can work very well in limited-data scenarios.
**In case of large datasets:** Furthermore, as for the concern about a large real dataset, we have fully demonstrated the effectiveness of our method on the large-scale COCO in main Tab 2, main Fig 5, and appendix Tab 2 (the average improvement is 1.2% across **six** architectures). Note that COCO is one of the largest datasets with challenging taxonomy in semantic segmentation, consisting of 118K training pairs. Lastly, we hope to emphasize that it is extremely difficult and costly to collect million-scale training pairs in semantic segmentation (the latest Segment Anything dataset lacks semantic labels). Therefore, we believe our proposed roadmap of utilizing high-quality synthetic pairs to complement the real dataset is of great value.
---
Rebuttal Comment 1.1:
Title: Looking forward to further feedback
Comment: Dear Reviewer VTnr,
We are sincerely grateful to you for the precious time and selfless efforts you have devoted to reviewing our paper.
We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion.
Best regards,
Authors of paper 6586.
---
Rebuttal Comment 1.2:
Title: Looking forward to further feedback
Comment: Dear Reviewer VTnr,
We are sincerely grateful to you for the precious time and selfless efforts you have devoted to reviewing our paper.
We have provided our detailed response to each concern. Since the deadline for reviewer-author discussion is approaching, we would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion.
Best regards,
Authors of paper 6586.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer VTnr, thank you very much for your precious time and great contributions.
Since we are approaching the discussion deadline in 10 hours, we would like to ask if there is any further feedback on our rebuttal.
We are greatly motivated that Reviewer 6xi1 has improved the score from 3 to 5 (borderline accept), Reviewer FZD9 has improved from 4 to 6 (weak accept), and Reviewer jaKB keeps a score of 7 (accept). Besides, we believe our work has improved a lot from your constructive feedback. For example, we will add detailed experiments and discussions about the effectiveness of our method in case of small or large datasets (also provided above in the [rebuttal](https://openreview.net/forum?id=XOotfgPiUF&noteId=d5NhljZDiI)).
We are eagerly looking forward to your further attention. Thank you very much.
Best regards,
Authors of paper 6586. | Summary: The paper introduces an automatic dataset generation with mask-to-image translator. The proposed dataset generator enables to generate controllable and consistent semantic labels in generated images. The labels can be treated as fully supervised teachers from a generator and create pre-trained segmentation models under the (synthetic) supervised learning. Moreover, the dataset generation framework serves difficulty levels inside of the contents in an image. In the framework, the authors utilize FreestyleNet pre-trained with StableDiffusion in order to translate from mask to image (mask-to-image) for the pre-training pairs.
Strengths: - This paper is clearly written and easy to understand. The presentation to describe the proposed method (e.g., Figures 2, 3, 4) is high quality and convincing in visualization.
- The paper could serve as a good example for data-driven approach in semantic segmentation tasks.
- Experiments and their results cover multiple aspects of the performance of the synthetic pre-training. The paper will serve a good inspiration to others in synthetic pre-training and related topics.
- The two different aspects, 'noise filtering (Section 3.2)' and 'image re-sampling (Section 3.3)', are very reasonable approaches to synthetic pre-training. Indeed, both are shown to be effective in Tables 4 and 5. These two approaches complementarily improve the segmentation performance in terms of mIoU.
Weaknesses: The reviewer does not find a critical weakness in the paper. However, a few minor weaknesses can be noted.
- The paper includes descriptions such as 'controllable' in l.38 and l.179; however, could the proposed approach make dense annotations even more controllable? For example, a semantic mask could be manually edited in addition to using the image dataset as is. Could the authors consider this kind of flexible, human-edited approach? As an example of image editing and generation, the reviewer can raise GauGAN from [Park+, CVPR19]. The reviewer doesn't think arbitrary image editing needs to be supported, but it would be effective in making the synthetic segmentation dataset more flexible.
[Park+, CVPR19] Taesung Park et al. "Semantic Image Synthesis with Spatially-Adaptive Normalization" in CVPR 2019.
- This is not a critical weakness; however, can synthetic pre-training alone surpass the real-image dataset? In fact, the synthetic-only approach can reach comparable scores, as shown in Tables 1 and 2. In some cases, the gap is quite small (e.g., 48.5 vs. 48.3 with SegFormer-B4 on the ADE20K dataset). If the page limit allows, adding this discussion would make the paper more valuable.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Can the proposed dataset generator with FreestyleNet precisely generate the training images by taking care of object boundaries? As the paper describes in l.144-145, "We observe it is more precise in structures to synthesize images from masks than predict masks for images, especially in boundary regions". However, the reviewer is concerned that the quality of object boundaries may affect the recognition accuracy even after the filtering of noisy synthetic regions mentioned in Section 3.2.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are no negative limitations or societal impacts. On the contrary, the paper alleviates privacy issues by means of synthetic image (pre-)training. This direction can effectively help address such ethical problems in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful for your appreciation of our work. Many thanks for your efforts and constructive feedback. We hope the concerns are well addressed.
**Q4-1: How can the proposed method be more "controllable"?**
Thank you. We claim our framework of learning from synthetic images is more controllable, because we believe that in future works, we can edit the semantic masks to construct a targeted synthetic set. For example, considering the class imbalance issue, we may use *Copy-Paste on semantic masks* to produce new layouts that are inclined to rare classes to re-balance the class distribution. During the process, we also need to take the structure and object co-occurrence into account. In addition, we, or other works, may design a new generator for mask synthesis, which is conditioned on class distributions directly.
**Q4-2: Can synthetic images surpass real images alone?**
Since current results achieved by synthetic images are extremely close to real images, we expect synthetic images alone can surpass real images in the future when (1) more powerful image generators are utilized and (2) more effective processing strategies are proposed for synthetic images.
**Q4-3: The quality of boundary alignment**
Thank you. As shown in Fig 1 of our main paper and according to our manual check, we find the boundary alignment between synthetic images and conditioned masks is highly precise. The alignment quality is much higher than widely adopted pseudo labeling strategies that predict masks from input images. We safely conjecture that this is because our input semantic masks are encoded in the discrete one-hot format, which is "sharp" and much easier to recognize boundaries than the "smooth" RGB values that DatasetGAN feeds.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thank you so much to the authors. The Q&A covers some of the most important issues for future generative AI and learning. The reviewer encourages the authors, and will keep the paper rating as "7: Accept". Thanks again for the discussion.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer jaKB,
We are deeply grateful for your great appreciation of our work. Thank you so much for all the precious time you have devoted and the constructive feedback you have provided. We will include all your feedback in our final version.
Best regards,
Authors of paper 6586. | Summary: The paper talks about using synthetic images generated by generative models as the training set to achieve stronger semantic segmentation models. The efficacy of the method is evaluated on the ADE20K and COCO datasets, using the SegFormer model. The authors compare pre-training on synthetic images with joint training on real and synthetic images to evaluate which works best and in which scenario, and use sampling and filtering mechanisms during training to further improve the results.
Strengths: 1. The paper demonstrates a clear structure, effectively explaining the fundamental modules and training procedure involved. It successfully addresses the task of scene understanding by leveraging annotations obtained from generative models, thereby enhancing the results of the semantic segmentation task.
2. The inclusion of a self-adaptive module within the training pipeline is a valuable contribution. This module effectively refines erroneous or spurious training examples, leading to improved model performance.
3. The paper incorporates a sampling strategy that focuses on hard-to-learn cases. By prioritizing these challenging instances, the overall performance of the model is enhanced, resulting in better final results.
Weaknesses: Concerns about the methods used to improve performance:
1. Mining hard examples: this is not novel; different flavors of it have been used in many previous works on semantic segmentation and object detection to train better on hard examples.
2. Removing extraneous and harmful samples: again, the filtering mechanism proposed here is not that novel, as previous works in the domain of active learning have used similar techniques to improve training.
3. Use of off-the-shelf models both for generation and for training the model: nothing novel in that regard.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It would be interesting to know the class frequency of the newly generated dataset used to train the model. Does it follow a class frequency distribution similar to ADE20K and COCO?
2. What are the classes for which the improvement is seen to be significant?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful for your efforts and constructive feedback. We hope the concerns are well addressed.
**Q3-1: Mining hard examples are not novel**
We are only related to existing works (*e.g.*, OHEM) *from the aspect of motivation*. It is indeed a widely shared motivation. However, we use totally different approaches. We produce more synthetic images for harder masks, which is fundamentally different from OHEM that simply ignores high-confidence pixels. We allocate synthesis quotas discriminatively, while OHEM only performs selection. Our designed method is well suited to our framework that aims to learn from synthetic data. You may also refer to our response Q2-1 to Reviewer 6xi1 for more details. In Q2-1, we also quantitatively demonstrate the superiority of our re-sampling strategy over OHEM.
**Q3-2: Compare our filtering strategy with the arts in active learning**
Our filtering strategy *discards noisy synthetic regions* (failure cases during synthesis), while active learning aims to find the most informative samples for humans to annotate. The most informative samples in active learning mostly exhibit large or middle losses, while large-loss samples in our scenario are noisy and need to be discarded. Therefore, our motivations are totally different, even contrary to active learning.
**Q3-3: Using off-the-shelf models both for generation and training the model is not novel**
We hope to highlight that ours is indeed a pioneering work to improve the *fully-supervised semantic segmentation performance* with *synthetic data*. Our motivation and our framework are novel in several aspects. We have thoroughly compared our work with previous works in Related Work--"Learning from synthetic images". **[Differences]** Briefly summarized here:
- **[Few-shot *vs*. fully-supervised]** Existing works only focus on a constrained scenario, *e.g.*, few-shot labels, while we address the challenging but widely acknowledged fully-supervised scenario.
- **[Classification *vs*. semantic segmentation]** Existing works mostly address the classification task, which is cheap to label and for which human labels are not even so urgent (unsupervised learning already performs quite well), while our targeted semantic segmentation task is highly expensive and laborious to annotate.
- **[Image-to-pseudo-mask *vs*. mask-to-image]** Existing works predict pseudo masks for synthetic images, which is not precise enough, while we adopt a mask-to-image synthesis pipeline to produce better aligned image-mask pairs, as evidenced by our response Q2-5 to Reviewer 6xi1.
- **[Blindly using *vs*. carefully processing synthetic data]** Existing works of learning from synthetic data ignore the importance of processing synthetic data discriminatively, while our proposed filtering and re-sampling can impressively improve the effectiveness of synthetic data and ultimately yield a much stronger model.
We provide more discussions in the global response. Please refer to it if you are still concerned. Thank you very much.
**Q3-4: Class frequency of synthetic set and real set**
Since we synthesize images conditioned on semantic masks from the real training set, the class frequency of our synthetic set is exactly the same as the real set if no filtering and re-sampling strategies are applied. Then, if we apply filtering and re-sampling strategies to the synthetic set, its class frequency will be changed. Please refer to our uploaded global PDF for detailed visualizations of the class frequency. Thank you.
**Q3-5: The most improved classes**
We list the most improved ten classes on ADE20K (the gain is measured by IoU): (1) ship: +68.19, (2) microwave: +48.72, (3) arcade machine: +45.85, (4) booth: +45.66, (5) oven: +30.86, (6) skyscraper: +23.23, (7) swimming pool: +15.52, (8) armchair: +14.6, (9) hood: +14.43, (10) wardrobe: +13.24.
---
Rebuttal Comment 1.1:
Title: Looking forward to further feedback
Comment: Dear Reviewer FZD9,
We are sincerely grateful to you for the precious time and selfless efforts you have devoted to reviewing our paper.
We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion.
Best regards,
Authors of paper 6586. | Summary: This paper proposes to generate densely annotated synthetic images with generative models to help supervise the learning of fully supervised semantic segmentation frameworks. To improve the effectiveness of synthetic images, the authors further design a robust filtering criterion to suppress noisy synthetic samples at the pixel and class levels and propose an effective metric to indicate the hardness of semantic masks where they sample more synthetic images for harder masks. Ablation studies validate the effectiveness of the proposed method.
Strengths: 1. The logic of the article is generally clear, and the method is easy to understand.
2. The ablation experiments of this paper indicate the effectiveness of the proposed method to a certain extent.
Weaknesses: 1. The proposed strategy of re-sampling synthetic images based on mask-level hardness is somewhat like "Online Hard Example Mining" (OHEM), which is widely used in the computer vision area, including semantic segmentation, and the adopted metric to measure sample hardness is also the widely used average loss over all pixels in the input image. Where is the novelty of this part? I don't think generating more hard samples should be the contribution of this section, since it more likely belongs to the contribution of Section 3.1.
2. The analysis in Section 4.4 for Table 6 is not convincing in my eyes. First, after adopting "Filtering & Re-sampling", N_max no longer denotes the number of synthetic images used for training, thus the comparison in Table 6 is unreasonable. If I understand correctly, the number of used synthetic images should be n_p in Eq. (2), which may result in a wrong conclusion in Lines 339-348. Second, where is the performance ceiling when setting N_max after adopting "Filtering & Re-sampling"? It looks like setting a larger N_max (> 20) will yield better segmentation results. Finally, how is p in Eq. (2) set? The reviewer does not find this detail in either the paper or the supplementary materials.
3. Why are there no quantitative comparisons between the proposed method and previous methods like DatasetGAN and BigDatasetGAN? The authors argue that "the main drawback of such methods is the involvement of expensive human efforts." So, how about applying a simple pseudo-labeling strategy to these generated images and comparing the results between the image-to-pseudo-label strategy and your proposed mask-to-image synthesis strategy? If the results are comparable, where are the advantages of the proposed method? The reviewer believes it is important to compare your method and previous similar methods quantitatively to show the novelty of this paper.
4. It seems that the performance improvement in Table 3 for Mask2Former is limited. To my knowledge, simply running Mask2Former twice may also bring such improvements. Could the authors give some analysis of the limited gains for Mask2Former?
5. The filtering strategy in Section 3.2 is naïve and not reliable in my eyes. Specifically, could the authors provide any quantitative results to show that the proposed filtering strategy filters out noisy synthetic regions rather than hard samples?
6. Previous studies like "Focal Loss for Dense Object Detection" indicate that OHEM is unreasonable. Have the authors compared the re-sampling strategy with objective-function-based strategies like focal loss or weighted cross-entropy loss to show the effectiveness of the method?
7. Why must a higher mIoU be a good sign in Table 4? For example, what if the method generated densely annotated synthetic images from domains different from the training and testing images? This might lead to a decrease in mIoU but make the segmentor adapt to broader application scenarios. The reviewer thinks the latter is more important.
8. Could the authors give the results of performing the methods in Sections 3.2 and 3.3 on the real training images? That is, filter the real images with the proposed strategies and re-train the model to see the importance of the generated images.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions are listed in the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful for your efforts and constructive feedback. We hope the concerns are well addressed.
**Q2-1: Our re-sampling strategy is similar to OHEM**
In L236-L241, we have compared our re-sampling method with OHEM. In semantic segmentation, OHEM ignores high-confidence pixels and only computes the average loss on low-confidence pixels.
**[Relation]** Our motivations are related, *i.e.*, both emphasizing hard samples (however, our hard samples are semantic masks, while OHEM is image pixels).
**[Difference]** Our practices are fundamentally distinguished. We "generate" additional hard samples for models to sufficiently learn, while OHEM essentially only performs a one-hot "re-weighting" (loss weight 0 for high-confidence pixels, and weight 1 for low-confidence pixels).
**[Superiority]** We *quantitatively* compare our re-sampling method with OHEM below, proving our method is evidently superior to OHEM. (Results below are obtained by training solely on synthetic images with Segmenter-ViT-S, and the filtering strategy is applied to all methods).
|Baseline|+ OHEM (thresh=0.7, min kept=100K)|+ Our re-sampling|
|:-:|:-:|:-:|
|44.0|44.2|**45.4**|
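For clarity, the OHEM baseline in this table performs pixel selection rather than synthesis. A minimal Python sketch of this one-hot re-weighting, with a small `min_kept` for illustration (the baseline above keeps at least 100K pixels per batch, and real OHEM implementations may differ in detail):

```python
def ohem_select(confidences, thresh=0.7, min_kept=2):
    """Return indices of pixels kept for the loss: all pixels whose
    predicted confidence is below `thresh`, topped up to `min_kept`
    by the least confident pixels. Kept pixels receive loss weight 1,
    all others weight 0; no new samples are ever generated, which is
    the key contrast with mask-level re-sampling."""
    order = sorted(range(len(confidences)), key=lambda i: confidences[i])
    kept = [i for i in order if confidences[i] < thresh]
    if len(kept) < min_kept:
        kept = order[:min_kept]
    return sorted(kept)
```

With four pixels at confidences `[0.9, 0.5, 0.95, 0.6]` and the default threshold 0.7, only the two low-confidence pixels (indices 1 and 3) contribute to the loss.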
**Q2-2: How to set $p$ in Eq 2**
As mentioned in L233 "for the $p$-largest-hardness mask", the $p$ denotes *the rank of a mask* in terms of hardness, ranging from 0 to $(N-1)$ as an integer. According to this $p$ (hardness rank), we then determine the synthesis quota (number of synthetic images) for a mask by Eq 2.
**Q2-3: Tab 6 is not convincing, because $N_{\max}$ does not denote the number of synthetic images when using re-sampling**
In the re-sampling case, we actually use fewer synthetic images than the non-re-sampling counterpart, so the better performance from our re-sampling method is convincing. Specifically,
- when **not using** re-sampling, all semantic masks share the same synthesis quota, *i.e.*, always $N_{\max}$ synthetic images *from a single mask*.
- when **using** re-sampling, most semantic masks are equipped with fewer than $N_{\max}$ synthetic images. The concrete synthesis quota for each mask is determined by Eq 2, which evenly distributes the number of synthetic images for all semantic masks from 1 to $N_{\max}$. The number of total synthetic images is nearly halved after the re-sampling process. Hence, our superiority over the non-re-sampling counterpart is convincing.
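To make the quota assignment above concrete, here is a minimal Python sketch consistent with this description: hardness rank $p$ runs from 0 (hardest mask) to $N-1$, and quotas are spread evenly from $N_{\max}$ down to 1. The exact rounding rule of Eq 2 is not reproduced in this response, so `synthesis_quota` is an illustrative assumption:

```python
def synthesis_quota(p, n_masks, n_max):
    """Number of synthetic images for the mask with hardness rank p
    (p = 0 is the hardest, p = n_masks - 1 the easiest). Quotas go
    evenly from n_max down to 1, so the total synthetic-image count
    is roughly half of n_masks * n_max. The rounding is an assumed
    detail, not the paper's exact Eq 2."""
    return n_max - round(p * (n_max - 1) / (n_masks - 1))

# e.g. 10 masks with n_max = 20: hardest gets 20 images, easiest gets 1
quotas = [synthesis_quota(p, 10, 20) for p in range(10)]
```

For 10 masks with $N_{\max} = 20$, the quotas sum to roughly 100 rather than $10 \times 20 = 200$, matching the "nearly halved" total described above.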
**Q2-4: The performance ceiling when increasing $N_{\max}$**
Please refer to our response Q1-5 to Reviewer 3GaN.
**Q2-5: Comparison with DatasetGAN, ***i.e.***, annotating pseudo semantic masks for synthetic images**
Thank you for your constructive feedback.
(1) Following your advice, we predict pseudo masks for our synthetic images with the SOTA model (ViT-Adapter-BEiTv2-L-IN22K, ICLR'2023) on ADE20K. The mIoU between predicted pseudo masks and GT masks is 49.79. The final validation mIoU is compared below.
|Image-to-Pseudo-Mask|Mask-to-Image (Ours)|
|:-:|:-:|
|44.0|**45.4**|
(2) Besides, as an extension, we further borrow the real COCO data as an unlabeled source for ADE20K. We validate whether COCO images along with the pseudo labeling strategy can benefit our targeted ADE20K. The results are listed below (\*: reproduced by us).
|Real Only\* (Segmenter-ViT-S)|Real + COCO (Pseudo labeling)|Real + Synthetic (Ours)|
|:-:|:-:|:-:|
|45.8|46.2|**48.0**|
**Q2-6: Improvement with Mask2Former-Swin-L-22K is not significant (56.0 $\rightarrow$ 56.4)**
As noted in Tab 3 caption, this specific model is sufficiently pre-trained on the extremely large-scale ImageNet-22K. Therefore, its hunger for downstream fine-tuning data is diminished. For other Mask2Former models, our framework performs impressively, *e.g.*, **+3.3%** with Swin-T and **+1.6%** with Swin-S (appendix Tab 1 \& 3).
**Q2-7: Could the filtering strategy filter noisy synthetic regions rather than hard samples?**
We present average losses on filtered/non-filtered real/synthetic regions below. First, comparing the loss on non-filtered regions (id 1 and 2), the loss magnitudes of real and synthetic regions are close. Then, comparing the loss on filtered regions (id 3 and 4), it is obvious that the synthetic loss becomes much larger (nearly 3$\times$) than the real loss. Thus, we can conclude that there must exist abundant noise in the filtered synthetic regions.
|Real non-filtered (id: 1) |Syn non-filtered (id: 2)|Real filtered (id: 3)|Syn filtered (id: 4)|
|:-:|:-:|:-:|:-:|
|0.281|0.363|1.100|3.317|
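As an aside, the pixel-level filtering discussed here can be illustrated with a minimal sketch. This is not the paper's exact rule; the loss-quantile threshold, the `quantile` hyper-parameter, and the `filter_noisy_pixels` helper are our own assumptions for illustration:

```python
import numpy as np

IGNORE_INDEX = 255  # label value commonly skipped by segmentation losses

def filter_noisy_pixels(loss_map, labels, quantile=0.95):
    """Mark the highest-loss pixels of a synthetic image as IGNORE_INDEX so
    that the training loss skips them (an illustrative sketch only)."""
    threshold = np.quantile(loss_map, quantile)
    filtered = labels.copy()
    filtered[loss_map >= threshold] = IGNORE_INDEX
    return filtered

loss_map = np.array([[0.1, 0.2], [0.3, 5.0]])   # one clearly noisy pixel
labels = np.zeros((2, 2), dtype=np.uint8)
filtered = filter_noisy_pixels(loss_map, labels)
```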
**Q2-8: Compare our re-sampling with objective functions, ***e.g.***, Focal loss and weighted CE loss**
Results are listed below, where our re-sampling strategy performs much better than the mentioned objective functions.
|Baseline|+ Focal loss|+ Weighted (by class frequency) CE |+ Weighted (by val IoU) CE|Our re-sampling|
|:-:|:-:|:-:|:-:|:-:|
|44.0|43.9|37.8|43.9|**45.4**|
**Q2-9: Why higher mIoU in Tab 4 is better, what if considering the robustness?**
We examine the capability of our model to deal with unseen domains. Concretely, we transfer our model trained on ADE20K to COCO and measure mIoU on the 46 overlapping classes. As shown below, our two strategies are still effective for images from unseen domains.
|Baseline|+ Filtering |+ Re-sampling|+ Filtering \& Re-sampling|
|:-:|:-:|:-:|:-:|
|37.7|40.3|39.4|**41.3**|
**Q2-10: Apply the two strategies to real images**
As for our re-sampling strategy, since there is only a single real image available for each semantic mask, we resort to over-sampling (repeating) real images for harder masks; the repetition count is decided by Eq 2. As shown below, our re-sampling strategy can still boost the real dataset. It is also expected that the filtering practice downgrades the result on the real dataset, because the real dataset is free from noise. This further demonstrates that our filtering design is well suited to synthetic data.
|Baseline|+ Re-sampling|+ Filtering |+ Filtering \& Re-sampling|
|:-:|:-:|:-:|:-:|
|45.8|**46.7**|43.1|43.5|
---
Rebuttal Comment 1.1:
Title: Looking forward to further feedback
Comment: Dear Reviewer 6xi1,
We are sincerely grateful to you for the precious time and selfless efforts you have devoted to reviewing our paper.
We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion.
Best regards,
Authors of paper 6586.
---
Rebuttal Comment 1.2:
Comment: Thanks for the authors' answers. I'd like to raise my score to Borderline Accept.
---
Reply to Comment 1.2.1:
Title: Thank you for your appreciation of our work
Comment: Dear Reviewer 6xi1,
Thank you very much for your acknowledgment of our rebuttal and appreciation of our work. Our work has improved a lot from your constructive feedback. Thank you.
Best regards,
Authors of paper 6586.
---
Rebuttal 1:
Rebuttal: **[Contributions]** Our technical contributions mainly lie in three folds:
- **[New target \& new roadmap]** We present a new roadmap to enhance *fully-supervised* semantic segmentation via generating *densely annotated* synthetic images with generative models. Our data-centric perspective is orthogonal to the widely explored model-centric (*e.g.*, network architecture) perspective.
- **[New problem]** We highlight the necessity of designing processing strategies for synthetic images. With the simple filtering and re-sampling strategies we present, the model trained with synthetic images can achieve performance comparable to its real-image counterpart, *e.g.*, 48.3 *vs*. 48.5 mIoU on ADE20K and 49.3 *vs*. 50.5 on COCO-Stuff.
- **[Stronger performance]** We achieve 2.0% improvement on average on ADE20K across seven architectures, which is a much larger gap than previous model-centric works achieved (mostly 1%). We believe this will inspire more future works to investigate this promising direction.
**[Uniqueness]** Our work is distinguished from existing works in that:
- **[Few-shot *vs*. fully-supervised]** Existing works only focus on a constrained scenario, *e.g.*, few-shot labels, while we address the challenging but widely acknowledged fully-supervised scenario.
- **[Classification *vs*. semantic segmentation]** Existing works mostly address the classification task, which is cheap to label and arguably less urgent for human annotation (unsupervised learning performs quite well), while our targeted semantic segmentation task is highly expensive and laborious to annotate.
- **[Image-to-pseudo-mask *vs*. mask-to-image]** Existing works predict pseudo masks for synthetic images, which is not precise enough, while we adopt a mask-to-image synthesis pipeline to produce better aligned image-mask pairs, as evidenced by our response Q2-5 to Reviewer 6xi1.
- **[Blindly using *vs*. carefully processing synthetic data]** Existing works of learning from synthetic data ignore the importance of processing synthetic data discriminatively, while our proposed filtering and re-sampling can impressively improve the effectiveness of synthetic data and ultimately yield a much stronger model.
Lastly, we hope to highlight that the filtering strategy only accounts for a (small) portion of our whole work. With this design, we mainly want to emphasize the necessity of processing synthetic images, **which is rarely considered in previous works**. Indeed, we value this motivation more than the concrete instantiations. We believe we raise a new problem for future works about *how to better learn from synthetic data*, instead of simply focusing on better synthesis. Besides, we are aware that our proposed filtering design is related to existing works (**@3GaN**) in semi-supervised learning, but we adopt similar motivations for totally different scenarios. We will add more discussions in the revised version.
Pdf: /pdf/e9ab8b106109e30d704d069741ece845478fb892.pdf
---
Summary: This paper proposes a method of synthesizing training images and corresponding semantic masks for training a semantic segmentation network. The off-the-shelf semantic image synthesis model, FreestyleNet, is used to generate images from existing semantic masks. Following the proposed re-sampling technique based on mask-level hardness, harder samples are generated more frequently. During training, to avoid noisy pixels hampering model training, a pixel-level ignoring technique is used. The generated synthetic images are shown to be effective for model training when used together with the existing fully supervised labels.
Strengths: - The paper is overall well-written and easy to understand.
- It is interesting that the synthesized images can improve the performance together with fully supervised dataset. This can be practically utilized for many researchers.
Weaknesses: - The proposed method heavily depends on trained mask-to-image generative models. The authors showed that naive generation of synthetic images is not sufficient for training a segmentation network, but the filtering and re-sampling techniques are quite naive. Specifically, ignoring uncertain pixels during training is popularly used for label-efficient learning (e.g., weakly and semi-supervised semantic segmentation).
- Closely related references, copy-paste methods (e.g., [ref1]), are missing. They also augment training data by synthesizing images, but unlike the proposed method, they do not require any additional heavy models. The copy-paste method should be discussed and compared.
[ref1] Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation
- In Abstract, the authors mentioned that "We surprisingly observe that, merely with synthetic images, we already achieve comparable performance with real ones", but I think it is overstated. To synthesize these images, the trained FreestyleNet is required, but FreestyleNet is already trained with real image-mask pairs. In addition, all the technical design (filtering, re-sampling) and values of hyper-parameters are determined with fully supervised validation data. I recommend the authors to tone down the sentence in Abstract.
- I guess the global hardness in Line 229 does not consider the difficulty of segmenting small objects. Intuitively, small objects of a hard class should increase the global hardness, but they actually contribute only slightly to it.
- The authors used only a limited number of synthesized images due to the disk issue. I recommend two additional experiments: 1) measure the performance change while varying the number of synthesized images; with this trend, we can infer whether more synthesized images can further improve the performance. 2) Increase the total number of synthesized images by saving low-resolution images or high-tolerance polygons of masks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do the authors have a plan to release the code? I strongly recommend the authors to make their code publicly available.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No limitation is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We are sincerely grateful for your efforts and constructive feedback. We hope the concerns are well addressed.
**Q1-1: Novelty of our filtering and re-sampling strategies.**
Please refer to our global response for clarification on the filtering strategy.
Please refer to our response Q2-1 to Reviewer 6xi1 for clarification on the re-sampling strategy. Thank you.
**Q1-2: Discussion with Copy-Paste methods**
Thank you for your constructive advice. We agree with you that we should include mixing-based augmentation methods, *e.g.*, Copy-Paste and CutMix, in our related works. We will provide thorough discussions and comparisons in the revised version.
**[Differences]** Existing mixing-based methods fail to generate realistic images, due to ignoring semantic layouts and co-occurrence. Besides, their *re-assembled* images do not contain any novel objects. In contrast, as supported by our visualizations, our method synthesizes extremely realistic novel images and objects, benefiting the perception task significantly.
**Q1-3: Overstatement about "merely using synthetic images is comparable with real images"**
Thank you for your kind reminder. We will weaken the tone in the revised version.
**Q1-4: Small but hard objects contribute less to mask-level hardness**
Thank you for pointing this out. It is true that small but hard objects currently contribute less to mask-level hardness. We conjecture it would be beneficial if these objects could be correctly handled. However, our work aims to propose a *universal* framework to assist real images *across all classes and all objects*. We hope to highlight the value of synthetic data for high-level perception tasks, especially in the diffusion-model era. We demonstrate the value of simple data filtering and re-sampling principles for synthetic data, *even without taking special cases into account*. In our future works, we will follow your advice and handle small objects separately.
**Q1-5: Performance change with respect to more synthetic images**
After submission, we have attempted to increase $N_{\max}$ from 20 to 40, which means each semantic mask corresponds to at most 40 synthetic images. However, honestly, we do not observe an evident improvement: the results under $N_{\max}=20$ and $N_{\max}=40$ are similar. We think there are at least three reasons for this phenomenon:
- **[Extremely challenging scenario]** We aim to boost the fully-supervised baseline, which is significantly more challenging than the few-shot baseline addressed by existing works, *e.g.*, DiffuMask. We believe it is more practical in the real world to improve such a challenging but widely acknowledged baseline.
- **[Remarkable improvements have been achieved by $N_{\max}=20$]** As shown in appendix Tab 1, we remarkably improve the fully-supervised baseline from 48.7% $\rightarrow$ 52.0% (**+3.3%**) on ADE20K with Mask2Former-Swin-T. As a comparison, the research line of model-centric works, *e.g.*, SegViT [NeurIPS'22], only improves its precedent StructToken from 50.9% $\rightarrow$ 51.3% (+0.4%). Across our investigated **seven** architectures (appendix Tab 1), we improve the fully-supervised baseline by **+2.0%** on average, which is a much larger gain than existing model-centric works achieve (mostly only 1%, requiring many trials and errors on model designs).
- **[Small-scale semantic masks]** We synthesize images conditioned on limited semantic masks from the real dataset (20K masks on ADE20K). These masks are small-scale and not diverse enough. Thus, multiple synthetic images from a shared mask may be redundant and not informative enough, so extra synthetic images bring limited further gain. However, as a pioneering work to enhance the challenging fully-supervised semantic segmentation with synthetic data, we think it is acceptable that there is still some room for subsequent works to refine these designs. In the future, there may be ways to first generate novel and diverse semantic masks for later image synthesis.
**Q1-6: Code release**
We promise to release all our codes, well-trained models, and training logs upon acceptance.
**Q1-7: No limitation is discussed**
We have indeed discussed it in Appendix Section D. We will prioritize it to the main paper in the revised version.
---
Rebuttal Comment 1.1:
Title: Looking forward to further feedback
Comment: Dear Reviewer 3GaN,
We are sincerely grateful to you for the precious time and selfless efforts you have devoted to reviewing our paper.
We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion.
Best regards,
Authors of paper 6586.
---
Reply to Comment 1.1.1:
Title: Further feedback
Comment: Dear Reviewer 3GaN,
Thank you for your selfless efforts. As for your previous concern about discussions with Copy-Paste (our response Q1-2), in addition to our previous analysis and qualitative comparisons, we here provide more quantitative results about Copy-Paste (object-level mixing, we use ClassMix in semantic segmentation for no bounding box information) and CutMix (random rectangle-region mixing):
| Real Only | Real + Copy-Paste | Real + CutMix | Real + Synthetic (Ours) |
|:---------:|:-----------------:|:-------------:|:-----------------------:|
| 45.8 | 45.6 (-0.2) | 45.9 (+0.1) | **48.0 (+2.2)** |
In conclusion, Copy-Paste and CutMix do not help much in the *fully-supervised semantic segmentation* task. Similar observations are also reported in [1, 2] (please refer to their *fully-supervised* results, which are even downgraded after applying CutMix or ClassMix).
[1] Semi-supervised semantic segmentation needs strong, varied perturbations, In *BMVC*, 2020.
[2] ClassMix: Segmentation-based data augmentation for semi-supervised learning, In *WACV*, 2021.
---
Rebuttal Comment 1.2:
Comment: I appreciate the authors' response. My minor concerns are addressed, but I still think the strength of the proposed method comes largely from the superiority of FreestyleNet. The filtering and hard negative sampling seem trivial to me. I would keep my original rating.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer 3GaN, thank you very much for your further feedback. We are glad to know that part of the concerns has been addressed. As for the remaining concern about whether the strength of our method comes largely from FreestyleNet instead of our proposed two processing techniques, we would like to provide some further clarifications here.
- First, directly using the synthetic pairs from FreestyleNet *without any processing* **will not bring any improvement** to the challenging fully-supervised real-image baseline. This has been supported by Table 5 in our main paper. We borrow the results below (the real-only result is borrowed from main Table 3 with SegFormer-B4). The inferior result of blindly integrating synthetic images (FreestyleNet, -0.3 mIoU) also explains why existing works [1, 2] mostly address only the few-shot scenario, which contains a few real images and is easy to boost. In comparison, with our filtering and hardness-aware synthesis strategies, we can significantly boost the challenging but practical fully-supervised scenario (**+1.8 mIoU**).
| | Real data | Synthetic data *w/o processing* | Synthetic data *w/ processing* | validation mIoU |
|:-:|:-:|:-:|:-:|:-:|
| Real only | ✔ | | | 48.5 |
| FreestyleNet | ✔ | ✔ | | 48.2 (-0.3) |
| **Ours** | ✔ | | ✔ | **50.3 (+1.8)** |
- Second, we have further validated the necessity and superiority of our two strategies in Table 6 of our main paper. Results are borrowed below. We observe that, *without our proposed techniques*, scaling up the number of synthetic images *does not* yield consistent improvement, even gradually downgrading the performance. In addition, our proposed method tremendously outperforms the plain FreestyleNet which does not include any processing techniques. The performance gap can be as large as **+5.0 mIoU**.
| $N_{\max}$ (scaling ratio) | 6 | 10 | 20 |
|:-:|:-:|:-:|:-:|
| w/o processing (FreestyleNet) | 43.7 | 43.6 | 43.3 |
| **w/ processing (Ours)** | 47.2 | 47.7 | **48.3** |
| Superiority over FreestyleNet | +3.5 | +4.1 | **+5.0** |
- Lastly, as we provided in the [rebuttal](https://openreview.net/forum?id=XOotfgPiUF&noteId=k6Zw1ZF8r9), our hardness-aware synthesis strategy is not trivial. It is much better than online hard example mining (OHEM). We also borrow the comparisons below (44.2 *vs.* 45.4). As for your opinion that similar filtering strategies also exist in label-efficient learning, we agree with this. Indeed, this is a very common motivation. Similar practices also exist in noisy label learning. However, as we clarified in the global response, *we use similar motivations for totally different purposes*. We aim to recognize the synthesis failure cases, which proves fundamental to the success of the later perception task. The necessity of "cleaning" synthetic images is almost completely ignored in previous works [3] that utilize synthetic images to benefit perception tasks. *We believe that drawing attention to "more effectively" learning from synthetic images, rather than always focusing on better synthesis quality, is also our contribution to the community.*
|Baseline|+ OHEM (thresh=0.7, min kept=100K)|+ Our hardness-aware synthesis|
|:-:|:-:|:-:|
|44.0|44.2|**45.4**|
Please tell us if you have any further concerns about this response. We are more than willing to provide any further explanations. Thank you very much for your precious time and great contributions.
[1] He, Ruifei, et al. "Is synthetic data from generative models ready for image recognition?." *ICLR* 2023.
[2] Wu, Weijia, et al. "Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models." *ICCV* 2023.
[3] Azizi, Shekoofeh, et al. "Synthetic data from diffusion models improves imagenet classification." *ICCV* 2023.
---
Title: TD Convergence: An Optimization Perspective
Decision: Accept (poster)
Summary: This work studies the TD learning algorithm from an optimization point of view, which differs from the more classical fixed-point Bellman-operator point of view. The goal of the paper is to argue that this alternative viewpoint permits a better understanding of TD learning and a generalization of its convergence results that explains its practical success. The investigation of the known counterexample of TD divergence makes it possible to identify the interplay of two forces that determine the convergence behavior of TD learning, namely a so-called target force and an optimization force. It is shown that TD exhibits convergent behavior when the optimization force dominates the target one. These insights are then instantiated for linear function approximation with squared loss and beyond, under strong convexity and smoothness assumptions, even under the celebrated deadly triad.
Strengths: - Understanding the behavior of the celebrated TD learning algorithm in the deadly triad setting and beyond the linear function approximation setting is an important research goal given the popularity of the algorithm and its potential impact in RL.
- Interesting insights starting from the simple counter example in Section 4 are provided and clearly explained.
- The paper is very well-written, well-organized and overall easy to follow. To the best of my knowledge, proofs (including the appendix) are correct and cleanly presented.
Weaknesses: 1. It is not made very clear that the scheme proposed is actually different from the classical TD learning which was analyzed in [3] since the iterate $\theta_t$ of the target network is frozen. The paper considers a ‘target-based’ version of TD learning which was inspired by the DQN algorithm using target networks [1].
2. Since one of the motivations of the paper is to show that the optimization point of view allows to address more general settings than the linear function approximation setting with square loss, I would expect a more detailed discussion of these cases in Section 6 giving the definitions of the $H$ function in that case and verifying the uniform assumptions 1 and 2 of Section 6. The discussion in l. 277 to 280 is quite minimal. The generalization does still seem a bit restrictive and the analysis provides sufficient conditions that do not close the question of the understanding of the divergence behavior of TD learning. See also the Questions section.
3. Although the interpretation in terms of ‘target force’ and ‘optimization force’ has not been described as such in prior work to the best of my knowledge, the arguments used to show the results are quite classical and the technical novelty is very limited in my opinion. The proofs of Section 5 rely on the classical stability criteria in control for linear systems, which were also used in other works ([16], see also the discussion of related works below), even if the possibility of using other distributions instead of the stationary one was not described. The proofs for Section 6 follow the standard analysis of gradient-descent-like algorithms for smooth and strongly convex objectives, up to the drifting parameter $\theta_t$ which is periodically synchronized with the online iterate.
4. **Related work**: Closely related works are not discussed in detail, especially those analyzing the ‘target-based’ version of TD learning which is central in this work.
**(a)** While the present work indeed provides some new insights, the optimization perspective proposed in this work is not completely new and I believe some additional discussion regarding this would be welcome. As a matter of fact, [16] is only briefly mentioned in l. 275, whereas the optimization point of view is alluded to in the remark in Section 2.4 of [16] (see also sections 3 and 5 therein) where the modified version of the Mean Square Bellman Operator with two variables (target and online) clearly appears. While the generalization beyond linear function approximation and the squared loss (under some Lipschitzness and strong-convexity assumptions) and the flexibility to consider a different distribution from the Markov chain’s stationary distribution are interesting insights, the results of Proposition 1 and Corollary 2 are not very novel. For instance, instead of periodically synchronizing the target network with the online one as in Algorithm 2, one could also consider the online moving average update rule proposed in the popular DDPG algorithm (Lillicrap et al. 2016) which was analyzed in the linear function approximation on-policy setting in [16, sections 2, 3, 5] and in Barakat et al. 2022 (see Sections 4.2, 5.1, 6.1). The aforementioned results also provide almost sure convergence results and sample complexity analyses accounting for noisy settings, unlike the present work.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning, ICLR 2016.
Barakat, A., Bianchi, P., Lehmann, J. Analysis of a Target-Based Actor-Critic Algorithm with Linear Function Approximation, AISTATS 2022.
**(b)** If one of the main motivations or consequences of this work is to show that the ‘optimization viewpoint’ can explain the possible convergent nature of TD learning in the presence of the deadly triad as mentioned in the conclusion (under some assumptions and some suitable choice of the sampling distribution in the off-policy case), then, the ability of the target-based updates (which is actually the ‘optimization point of view’) was also advocated for in Zhang et al. 21 [25] at least in the linear function approximation setting (see Section 4 for off-policy evaluation with Q functions which could be easily adapted to V functions).
**(c)** Further works such as Liu and Olshevsky 2021 could also be relevant to mention. This work points out that original TD learning (i.e. Eq. (2), as proposed in [6] and analyzed in [3]) can be seen as what they call a ‘gradient splitting’ (see Section 3 therein) even if it is known that the TD learning update rule does not correspond to any gradient descent over any function.
Liu, R., Olshevsky, A. Temporal Difference Learning as Gradient Splitting, ICML 2021.
**(d)** Several works have also considered the analysis of TD learning with nonlinear function approximation beyond the linear setting (see e.g., Brandfonbrener and Bruna 2020; Agazzi and Lu 2021 to name a few). A discussion about these works seems also relevant given the generalization motivation of the present work.
Brandfonbrener, D., Bruna, J. Geometric insights into the convergence of nonlinear TD learning, ICLR 2020.
Agazzi, A., Lu, J. Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes, MSML 2021.
**Minor typos:**
l. 104: ‘the root cause of TD’, of divergence?
l. 242: capital $H$ instead of $h$
l. 541-542: $\theta^{\star}$ instead of $\theta^{*}$
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In the following, I list a few questions of which the first two are the main ones, focusing on the limit point definition and the strength of the assumptions.
1. All the theoretical results show convergence to the fixed-point $\theta^{\star}$. How is this point defined in those results? Eq. (5) mentions that if convergence happens then $\nabla_{w} H(\theta^{\star}, \theta^{\star}) = 0$. Such a characterization (which is also the fixed-point characterization of the TD solution) is used in all the proofs. Then l. 142-143 state that ‘Whenever it exists, we refer to $\theta^{\star}$ as the fixed point of these iterative algorithms’. For the convergence results to be meaningful, the existence of the fixed point should be guaranteed. I guess you define $\theta^{\star}$ to be the fixed point of the projected Bellman operator, which is indeed a contraction under some conditions, but this is not very clear in the paper. However, as alluded to in the paper in l. 212-218, it is not clear whether the projected Bellman operator is still a contraction when one considers a distribution different from the stationary state distribution of the Markov Chain. What would then be the limit point(s) in the results in the case where existence is not guaranteed by the fixed point arguments relying on the operator viewpoint? Do you just suppose in that case that there exists a unique point such that $\nabla_{w} H(\theta^{\star}, \theta^{\star}) = 0$ (which is what is required and used to conduct the proofs)?
2. Concerning the assumptions and the examples provided in Section 6, input-convex neural networks [18] provide convex functions with respect to the inputs and not with respect to their weights (parameters). How would these help guarantee that the strong-convexity Assumption 2 holds? Satisfying both assumptions does not seem straightforward. The constants $F_{\theta}$ and $F_{\omega}$ are uniform constants over $\theta$ and $w$ that are not easy to define and compute in practice, which makes the core stability condition $F_{\theta} < F_{w}$ difficult to verify. I understand though that the focus of the paper is theoretical. Even in the linear setting and assuming that the feature matrix is given and known, can we for example suggest some distributions beyond the stationary state distribution for which the condition holds?
3. Regarding section 4, when you mention that the counter example was identified by [3], are you referring to Section IX in Tsitsiklis and Van Roy 97 (TAC) for this? Is it a simplified/modified example of that one?
4. Could you mention the stationary state distribution of the Markov chain for the counter example? This could be added if relevant to show that this stationary distribution is indeed valid and leads to a convergent behavior as expected.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Beyond the points raised above, one limitation that is not clearly mentioned is that the convergence results are limited to the deterministic setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing that we have tackled an important research goal, for stating that our paper is insightful, clearly-written and well-organized, and also for your diligence in checking our proofs.
In terms of weakness 1, we will better emphasize that one of the major contributions of our paper is to show convergence of TD under a frozen target network ($K>1$). In fact, our results include the special case (with $K=1$), but also bridge the gap with practice, where usually a much larger value of $K$ is used. We will put more effort into emphasizing this.
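To make the $K>1$ setting concrete, a minimal sketch of target-based TD with a frozen target network might look as follows. This is an illustration under our own simplifications, not the paper's exact Algorithm 2; the step size, loop counts, and the tabular sanity check are all assumptions:

```python
import numpy as np

def target_based_td(Phi, R, P, gamma, D, K=10, lr=0.05, n_outer=2000):
    """Target parameters `theta` stay frozen while the online parameters `w`
    take K gradient steps on the weighted squared loss; then they synchronize."""
    theta = np.zeros(Phi.shape[1])
    for _ in range(n_outer):
        w = theta.copy()
        targets = R + gamma * P @ Phi @ theta        # bootstrapped targets, frozen
        for _ in range(K):                           # K optimization steps
            grad = Phi.T @ D @ (Phi @ w - targets)
            w -= lr * grad
        theta = w                                    # synchronize target <- online
    return theta

# sanity check with tabular features, where the TD fixed point is (I - gamma P)^{-1} R
P = np.array([[0.5, 0.5, 0.0], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
R = np.array([1.0, 0.0, 2.0])
theta = target_based_td(np.eye(3), R, P, gamma=0.9, D=np.eye(3) / 3)
```

With tabular features and a small step size, the outer loop behaves like a damped value iteration, so it converges to the true values regardless of $K$.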
In terms of discussing examples that satisfy our assumptions, please see our detailed discussion in the general rebuttal. We will add this to the paper.
In terms of weakness 3, the novelty of our results, we respectfully disagree that the proof techniques are standard. In fact, by following the standard proof techniques of gradient descent (which Lee and He 2019 leveraged), one ends up with an error term that accumulates over iterations, and so exact convergence to the TD fixed-point cannot be shown with a finite $K$. Using proof techniques that are new to the RL literature, we showed for the first time that regardless of the value of $K$, and under the natural extension of the quadratic loss, contraction can be shown, and convergence to exactly the TD fixed-point can be concluded. To the best of our knowledge, this is quite novel and significant.
As for Section 5 about linear approximators and squared loss, our intention was not to claim great novelty in exploring this setting (we credited Tsitsiklis and Van Roy and other influential papers). Our intention was merely to formalize our intuitions from the counter example before moving to the general setting in Section 6 and presenting our main novel result. We are happy to clarify this in the paper.
In terms of related work, we agree that more discussion will strengthen the paper. We extensively discussed the relationship to two of your suggested papers in our general rebuttal and in response to reviewer 16Sb. Here, we discuss a third paper you mentioned and defer further discussion to the paper due to space limits. Consider the gradient-splitting work of Liu and Olshevsky. Their contribution is to introduce the notion that the update of the TD algorithm with $K=1$ could be thought of as gradient splitting. Leveraging this insight, and by adding a projection step to TD, they improved the sample complexity bound of Bhandari et al (which we have cited in the submission). The key difference with our case is that we are interested in vanilla TD convergence under general $K$, and also that we have extended the quadratic loss to the somewhat more general case of strongly-convex functions. Nevertheless, we found this work quite interesting and will add it to the paper. Thanks.
Please rest assured that, based on our investigation, the other papers you mentioned, while quite innovative, do not undermine the novelty of our work. We will cite and discuss them, and we hope the reviewer considers increasing their score in light of this better situating of our paper.
Regarding the existence of a fixed point, we want to make a distinction between existence of a fixed-point and the fact that a certain operator is a contraction. For example, in the counter example a fixed-point always exists (namely $\theta^{\star}=0$), for which $\nabla_{w} H(\theta^{\star},\theta^{\star})={\bf 0}$. This is despite the fact that for some distributions TD might be divergent. So, the existence of a fixed-point is a property of the problem, not the algorithm that attempts to find the solution to that problem.
More generally, in the linear case with quadratic loss we can show that a fixed-point always exists under very mild assumptions. In particular, from $\nabla_{w} H(\theta^{\star},\theta^{\star})=0$ we get:
$\Phi^{\top} D(R+\gamma P\Phi\theta^{\star} - \Phi\theta^{\star})=0,$
meaning:
$\theta^{\star} = \big(\Phi^{\top} D(I - \gamma P)\Phi\big)^{-1}\Phi^{\top} D R.$
$P$ is a stochastic matrix, so its spectral radius is $1$ and $I-\gamma P$ is invertible for any $\gamma<1$. Whenever $\Phi$ is additionally full rank and the matrix $\Phi^{\top}D(I-\gamma P)\Phi$ is nonsingular (which holds under very mild assumptions), the inverse exists and the problem has a unique fixed-point. Some algorithms might diverge in their attempt to find the fixed point, but that does not change the fact that it exists.
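To make this concrete, here is a small numerical sketch (not from the paper; the two-state MRP, features, and distribution are arbitrary numbers chosen purely for illustration). It solves the gradient condition $\Phi^{\top} D(R+\gamma P\Phi\theta^{\star} - \Phi\theta^{\star})=0$ above and verifies that the resulting $\theta^{\star}$ indeed zeroes the gradient:

```python
import numpy as np

# A small, arbitrary 2-state MRP chosen purely for illustration.
gamma = 0.9
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])          # stochastic transition matrix
R = np.array([1.0, -1.0])           # expected rewards
Phi = np.array([[1.0], [2.0]])      # linear features, full rank
D = np.diag([0.5, 0.5])             # some state distribution

# Fixed point from solving Phi^T D (R + gamma*P*Phi*theta - Phi*theta) = 0.
A = Phi.T @ D @ (np.eye(2) - gamma * P) @ Phi
theta_star = np.linalg.solve(A, Phi.T @ D @ R)

# Verify the gradient condition at theta_star.
grad = Phi.T @ D @ (R + gamma * (P @ Phi @ theta_star) - Phi @ theta_star)
assert np.allclose(grad, 0.0)
```

Any choice of $D$ with positive entries yields a fixed point here, which is exactly the point that existence is a property of the problem, not of the algorithm.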
More generally, it is an open question whether a fixed-point exists with alternative choices of $H$; we generally need additional information to answer this question. In the optimization literature, determining whether a problem has a solution is a vibrant research area. However, this is beyond the scope of this paper, and we would love to explore this direction in future work.
To clarify our pointer to input-convex neural nets, the reviewer is correct in saying that the original paper focused on the convexity of the net with respect to the input given fixed weights. That said, the same network architecture the paper proposed is also convex with respect to the weights given a fixed input. To see this, consider the class of functions $f(x,\theta)=ReLU(x^{\top}\theta)$. Notice that this function is both a) convex with respect to $x$ given fixed $\theta$, and b) convex with respect to $\theta$ given fixed $x$. Thank you for your astute question, and we will clarify this in the paper.
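A quick numerical sanity check of this convexity claim (our own illustration, with arbitrary dimensions and random sampling): the midpoint inequality $f(x,\frac{a+b}{2})\le\frac{1}{2}(f(x,a)+f(x,b))$ should hold for all weight pairs $a,b$ at fixed $x$, since $\theta\mapsto x^{\top}\theta$ is linear and ReLU is convex.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)
f = lambda x, theta: relu(x @ theta)   # f(x, theta) = ReLU(x^T theta)

# Convexity in theta for fixed x (and, symmetrically, in x for fixed theta):
# check the midpoint inequality on random points. It holds because
# theta -> x^T theta is linear and ReLU is convex.
for _ in range(1000):
    x = rng.normal(size=5)
    a, b = rng.normal(size=5), rng.normal(size=5)
    assert f(x, (a + b) / 2) <= (f(x, a) + f(x, b)) / 2 + 1e-12
```

The same check with the roles of $x$ and $\theta$ swapped passes as well, reflecting the symmetry of $f(x,\theta)=\mathrm{ReLU}(x^{\top}\theta)$ in its two arguments.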
Regarding the counter example, we took it from the standard textbook of Sutton and Barto. In their Example 11.1, Sutton and Barto credit it to van Roy and Tsitsiklis and we followed the standard they set.
Regarding the stationary-state distribution of the example, the distribution depends on the parameter $\varepsilon$ used to define the transition model. In particular, it assigns $\frac{\varepsilon}{1+\varepsilon}$ to the first state and $\frac{1}{1+\varepsilon}$ to the second state. We will add this to the paper.
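This can be checked numerically. Note the transition matrix below is our hypothetical reconstruction consistent with the stated distribution (first state always moves to the second; the second returns to the first with probability $\varepsilon$) and may differ in detail from the paper's example:

```python
import numpy as np

eps = 0.3
# Hypothetical transition structure consistent with the stated
# stationary distribution (for illustration only).
P = np.array([[0.0, 1.0],
              [eps, 1.0 - eps]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
d = d / d.sum()

assert np.allclose(d, [eps / (1 + eps), 1 / (1 + eps)])
```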
You are right about the deterministic limitation. To clarify, we support stochastic transition matrices, but the gradient computation needs to be exact (deterministic). We will add this limitation.
---
Rebuttal Comment 1.1:
Title: post rebuttal
Comment:
I thank the authors for their detailed responses which answered all my questions, especially for additional comments regarding examples when the Lipschitzness and strong convexity assumptions are satisfied as well as the contextualization of the work within the recent literature. I believe some clarifications provided by the rebuttal are worth adding to the paper. Although I still think the setting is a bit restrictive as a generalization beyond the standard linear function approximation setting (in the deterministic case), I raise my score to 6 given the authors’ rebuttal and the comments regarding off-policy/on-policy convergence.
Regarding the existence of a fixed point, the sense of my comment was about conditions to ensure that such a point that is central to the results and their proofs exists in the more general setting which concerns this work. It is known that a fixed point exists in the linear case with quadratic loss (namely the standard TD solution in the linear function approximation setting) and that this point can be reached since the Bellman contraction allows to design even stochastic approximation algorithms (namely TD learning) to reach this point. I would have expected to see at least an example beyond the known more standard case where this fixed point is guaranteed to exist. My current understanding is that you suppose that there exists such point throughout the paper, I understand though that this may be treated on a case by case basis depending on the specific $H$.
On a more technical point concerning the input-convex neural nets, the ReLU example provided in the response introduces some nonsmoothness which does not seem to be strictly covered by the assumptions (maybe some smoothing preserving convexity could address this technical point though).
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We are delighted to hear that the reviewer found the discussions helpful, and we will be sure to add these discussions to the paper. We also do believe that the discussion phase truly made our paper much stronger. While the strong convexity assumption is a natural extension of the existing literature, we also do agree that it is not the most general setting. We hope that the publication of this paper could instigate further research in this direction to further generalize TD convergence using the optimization point of view.
Note that in providing the ReLU example our intention was to primarily argue that an input-convex neural network is also convex with respect to the weights. The reviewer is correct in stating that the ReLU activation, even though it is Lipschitz, may still be problematic because the gradient is undefined at 0. In this case we need to resort to smoothing, as you mentioned, or use alternative activation functions such as softplus.
We highly appreciate the raised score!
---
Review 2:
Summary: This paper studies convergence of the TD algorithm from the perspective of solving a shifting optimization problem. Through a classic failure case, the authors uncover two forces whose interplay reveals TD's convergence properties. These two forces both depend on the state visitation distribution, the state features, and the transition kernel. The authors point out that while the stationary-state distribution ensures convergence, this does not mean that no other state visitation distribution can, and they seek to establish sufficient properties. They generalize their analysis to TD error defined with more general functions and provide these sufficient conditions. Assuming the TD error $H$ has a value function gradient that is Lipschitz in the target function parameters and is also strongly convex in the value function parameters, a more general convergence criterion can be derived. The authors extend this result to the setting where the shifting optimization problem is only solved approximately at each step using $K$ gradient updates.
Strengths: I found this paper to be very clearly written. To my knowledge, the claim in Proposition 1 and subsequent results are novel and make significant progress towards understanding convergence of the TD algorithm.
Weaknesses: The paper does a great job explaining and motivating the novel TD convergence analysis, and I believe it can stand on its own as a "theory paper". But I think experimental support for the approach in a complex (e.g., deep RL) setting could have made the paper much stronger.
I am also not entirely clear which results are novel and can be attributed to the authors vs which were known beforehand. Equation (7) does not appear to be novel; some form exists in [35], sec 4. The paper could benefit from better signposting to explain where others got stuck and this paper advances.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - line 164: where does the $1/2$ come from in the equation? If that is there for convenience, can you add a $1/2$ to the def of $H$ under line 117?
- line 237: $M_w$ is positive definite if $\Phi$ is full rank **AND** $D$ is full-rank, i.e., every state has positive probability under $d(s)$, no?
- line 320: $L$ here is typically known as the strong-smoothness parameter, no? In contrast to the strong-convexity parameter.
- line 330: the condition number I'm familiar with (see [1]) is always greater than or equal to $1$ (max eig / min eig). In this case, I think you mean the "inverse condition number". Also, this technically means, $\sigma_K^2 = 1$ when $\kappa=1$, in which case, the analysis does not imply convergence. Can you please discuss this corner case?
- line 370: "exits" --> "exists"
[1] Guille-Escuret, Charles, et al. "A study of condition numbers for first-order optimization." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
http://proceedings.mlr.press/v130/guille-escuret21a/guille-escuret21a.pdf
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I see no need for discussion of negative societal impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading of our paper as well as the overall quite positive assessment. Thanks for pointing out that the paper is clearly written, and also for recognizing the novelty of our work in terms of extending the TD convergence proof to general $K$, as well as extending the function class from quadratic to other alternatives.
- The paper does a great job explaining and motivating the novel TD convergence analysis, and I believe it can stand on its own as a "theory paper". But I think experimental support for the approach in a complex (e.g., deep RL) setting could have made the paper much stronger.
In deriving a generalized proof for TD, our motivation was to be able to explain the remarkable empirical success of TD for general $K$, beyond linear function approximators, and in settings such as deep RL. We would like our theory to complement the existing evidence on the solid empirical performance of TD in the literature, and to provide further theoretical evidence that TD is a sound algorithm in a broader setting than understood in previous work.
- I am also not entirely clear which results are novel and can be attributed to the authors vs which were known beforehand
Thanks for highlighting this. We would like to summarize here the two main contributions of our paper, which we will better highlight in the main paper as well:
1- To the best of our knowledge, we are the first paper to show the contraction result of TD with a general value of $K$. We show contraction with convergence to exactly the TD fixed point. In this space, closest to our work is that of Lee and He (2019), which could only show that TD with general $K$ converges to a region around the fixed-point. They got stuck where their more classical analysis treats gradient descent as only approximately solving each iteration, and so they need to accumulate some error along the way. In contrast, we can show that even approximately solving each iteration (corresponding to finite $K$) is enough to obtain contraction (without any error term). Even though Lee and He are correct in saying that using a finite $K$ results in approximately solving each iteration, we can still show that each iteration remains a contraction by looking at the net effect of the updates to the online network and the single update to the target network. We prove that the net effect of these updates is one that ensures the iterate makes steady progress towards the unique fixed-point regardless of $K$.
2- To the best of our knowledge, we are also the first paper to show TD convergence in the most natural extension of quadratic functions, namely the strongly convex case. This allows us to argue that slight modifications of TD in terms of loss functions and function approximators are also sound so long as they satisfy our assumptions. We are unaware of any previous work that tackled this extension.
- where does the $\frac{1}{2}$ come from in the equation?
Yes, you are correct that this was added for convenience and we should also add it in line 117. Thanks for catching the missing $\frac{1}{2}$.
- $M_{w}$ is positive definite if $\Phi$ is full rank AND $D$ is full-rank, i.e., every state has positive probability under, no?
You are correct, and we will clarify this in the paper. Thanks for your diligence.
- $L$ here is typically known as the strong-smoothness parameter, no? In contrast to the strong-convexity parameter.
We think the reviewer means ``globally Lipschitz continuous,'' so yes, you are correct, and we will clarify this.
- the condition number I'm familiar with (see [1]) is always greater than or equal to (max eig / min eig). In this case, I think you mean the "inverse condition number". Also, this technically means, $\sigma_K^2=1$, when , $\kappa=1$ in which case, the analysis does not imply convergence. Can you please discuss this corner case?
You are right in saying that we mean the inverse condition number here. In terms of the corner case, notice that if $\kappa=1$ (which means we have a quadratic dependence on the online parameter), then $\sigma_K=\eta$, which by assumption is less than one. Therefore, this is a contraction and convergence is guaranteed.
- line 370: "exits" --> "exists"
We will fix it. Thanks!
---
Rebuttal Comment 1.1:
Title: Acknowledgement of Rebuttal
Comment: Dear authors, thank you for your rebuttal. You have addressed my concerns. I understand your point of supporting the performance of TD with general $K$ with citations rather than your own experiments (although reproducing others results never hurts). I also see that your analysis in the case where gradient descent is run for $K$ steps is a key contribution. I have read through the other reviews and see that you have emphasized this point in your rebuttals to them as well. I will continue to follow those dialogues, but for now, I maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We appreciate your careful reading of our paper and your engagement with our rebuttal. We also thank you for the continued support.
If there is any lingering or new question, please do not hesitate to bring it to our attention.
---
Review 3:
Summary: The paper studies TD-learning with target network updates. The authors recast the TD-learning algorithm into a time-varying optimization problem. The authors prove convergence for a function class with strong convexity and smoothness.
Strengths: The paper is easy to follow, and the motivation of the work is well explained by a simple example of the form $\theta\to2\theta$. Moreover, the viewpoint of target force and optimization force seems to be a novel viewpoint, and the theoretical result seems to be solid.
Weaknesses: 1. Assuming strong convexity and Lipschitzness is too restrictive to argue for a general function class. Moreover, regarding the condition $F_{\theta}<F_{w}$ in Theorem 3, I believe this is the key condition for convergence, but a discussion of whether it is a common condition to be met seems to be missing.
2. The analysis on iterative optimization objective for tabular and linear case has been studied in Lee et al. and Zhang et al.. The comparison with the existing work seems to be insufficient.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How strict is the condition $F_{\theta}<F_{w}$ in Theorem 3? Can we find any examples other than tabular or linear setting to show the convergence under general setting?
2. Can we also find conditions to ensure convergence for the Baird example?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: In summary, the impact of theoretical result is not sufficient for the following reasons:
- Assumption on strong convexity is too strong.
- There are no examples or experiments showing convergence of general function class other than tabular or linear setting.
- There is no comparison with the existing works Lee et al. and Chen et al.
Hence, I am leaning towards rejection as for now.
Lee, Donghwan, and Niao He. "Target-based temporal-difference learning." International Conference on Machine Learning. PMLR, 2019.
Chen, Zaiwei, John Paul Clarke, and Siva Theja Maguluri. "Target Network and Truncation Overcome The Deadly Triad in $ Q $-Learning." arXiv preprint arXiv:2203.02628 (2022).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the review. In what follows, we address the particular weaknesses and questions raised.
- A discussion seems to be missing whether it is a common condition to be met or not.
Please see our detailed discussion in the general comment part.
- The analysis on iterative optimization objective for tabular and linear case has been studied in Lee et al. and Zhang et al.. The comparison with the existing work seems to be insufficient.
We answered this question in depth in our general comment, but we are happy to distill our point here so as to better situate our result in comparison to the two papers mentioned by the reviewer. Starting with Lee and He (2019), note that while we were able to show TD convergence, Lee and He can only guarantee that TD will find a solution in a region around the fixed-point, because with finite $K$ the analysis needs to account for errors that are accumulated in each iteration. Another way to think about their result is that they can only show contraction if one uses gradient descent with infinite $K$. With finite $K$, they need to account for errors in solving each iteration (denoted by $\epsilon_k$ in their proofs, such as in Theorem 3). To make the point concrete, suppose that we solve the first iteration with some error, meaning that we approximately solve:
$\theta^{1} \approx \arg\min_{w} H(\theta^0,w),$
but then after we solve each iteration perfectly, meaning for $i\geq 1$:
$\theta^{i+1} = \arg\min_{w} H(\theta^i,w).$
Clearly, one can think of approximately solving the first iteration as initializing TD at a point different from $\theta^0$ and then doing perfect optimization in all iterations. The fact that we can then solve each subsequent iteration perfectly should give us exactly a contraction to the TD fixed-point based on our analysis. However, in this case the result of Lee and He (2019) can still only support convergence to a neighborhood characterized by the approximation error $\varepsilon$. In contrast, in this case we can guarantee convergence exactly to the TD fixed-point.
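The exact-solve regime described above can be illustrated numerically. This is our own sketch on a made-up linear/quadratic instance, and it shows only the $K\to\infty$ limit (each iteration solved exactly), which in that case is one application of the projected Bellman operator:

```python
import numpy as np

gamma, n = 0.9, 4
rng = np.random.default_rng(1)
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # stochastic
R = rng.normal(size=n)
Phi = rng.normal(size=(n, 2))                               # full-rank features

# On-policy weighting: stationary distribution of P.
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))]); d /= d.sum()
D = np.diag(d)

# TD fixed point: Phi^T D (R + gamma*P*Phi*theta - Phi*theta) = 0.
theta_star = np.linalg.solve(Phi.T @ D @ (np.eye(n) - gamma * P) @ Phi,
                             Phi.T @ D @ R)

# Exactly solving each iteration theta^{i+1} = argmin_w H(theta^i, w)
# is a weighted least-squares fit of Phi*w to the Bellman target.
theta = np.zeros(2)
for _ in range(200):
    target = R + gamma * P @ Phi @ theta
    theta = np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D @ target)

assert np.allclose(theta, theta_star)   # contracts to the fixed point
```

The paper's stronger claim, that finite-$K$ inner loops also contract, is what the analysis establishes beyond this classical limit.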
Moreover, in light of the more practical literature on TD where a finite $K$ is used, we were interested in showing that 1) we converge to the TD fixed-point exactly, and 2) with any value of $K$ the TD algorithm gives us a contraction. We indeed show in our main result that TD is a contraction with any value of $K$ (which we believe we are the first paper to show). More concretely, we show that smaller values of $K$ can damage the contraction factor, but each iteration nevertheless remains contractive regardless of $K$.
In terms of comparison with Zhang et al. (2021) and Chen et al. (2022), notice that due to the difficulties pertaining to proving convergence for vanilla TD, a line of existing research equips TD with modifications to make it more conducive to convergence. In this case, Zhang et al. (2021) introduced two projection steps that are crucial for obtaining convergence, and similarly Chen et al. (2022) study the case where a truncation step is added. These are very important techniques, and they can indeed make TD more convergent. However, our convergence result does not lean on these projection and truncation steps and is applicable to vanilla TD as well as TD with alternative loss functions and function approximators.
- Can we find any examples other than tabular or linear setting to show the convergence under general setting?
Please see our detailed discussion in the general comment part for more examples.
- Assumption on strong convexity is too strong.
Notice that the existing literature, including all three papers mentioned by the reviewer, revolves around the linear function approximation case with a quadratic loss function. This is just a special case of a strongly-convex function. So, to the best of our knowledge, we are showing TD convergence in a setting that is less stringent and more general than previous work. We would like to highlight that one can only hope to understand the more difficult cases (such as convex, weakly convex, and non-convex) by first understanding the most natural extension of the quadratic loss, namely the strongly-convex case. This was absent in existing work, and our goal was to fill this gap.
- There are no examples or experiments showing convergence of general function class other than tabular or linear setting.
Again, please see our detailed discussion in the general comment part for more examples.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. My main concerns regarding the comparison with existing works have been mostly addressed. However, I still have some remaining concerns regarding the condition $F_{\theta}<F_w$. This condition seems to be quite related to the assumption in equation (7) of Theorem 1 in Melo et al. That is, the condition $F_{\theta}<F_w$ can be met when we have certain strong assumptions on the behavior policy and target policy. Besides, condition (7) in Theorem 1 of Melo et al. has a factor of $\gamma^2$, whereas the proof in the attached rebuttal requires $\gamma$, which implies a stricter condition.
Melo, Francisco S., Sean P. Meyn, and M. Isabel Ribeiro. "An analysis of reinforcement learning with function approximation." Proceedings of the 25th international conference on Machine learning. 2008.
---
Reply to Comment 1.1.1:
Title: Situating Our Paper Relative to Melo et al.
Comment: We again thank the reviewer for carefully reading our paper. We highly appreciate the reviewer's engagement with the clarifications we made in our rebuttal.
Thanks for bringing the work of Melo et al. (2008) to our attention. While we did cite this paper in our original submission (reference [22]), we agree that a more nuanced discussion of this important paper is warranted. We distill the key differences here, and in particular we discuss the similarity between their condition (7) and our condition. This discussion will be added to the paper.
First, we understand the key contribution of Melo et al (2008) to be the extension of the ODE proof of Tsitsiklis and Van Roy (1997) from TD prediction to the more general control setting with Q-learning. Melo et al. show that asymptotically and under mild assumptions, the Q-learning algorithm with linear function approximation, quadratic loss, and $K=1$ converges to a fixed-point.
In this context, and to answer the reviewer's specific question, the condition in their equation (7) is in fact quite different from the condition identified in our paper. In their equation (7), they require that the eigenvalues of $\Sigma_{\pi}$, the policy-conditioned covariance matrix $\Phi^{\top} D\Phi$, dominate the eigenvalues of a second matrix $\gamma^{2} {\Sigma_{\pi}^{\star}}(\theta)$. Here $\Sigma^{\star}_\pi(\theta)$ is a similar covariance matrix for the features, but one that is computed based on the action-greedification step.
While this condition may look similar, a deeper investigation reveals that it is completely different from our condition. To recap, our condition requires that the optimization force (namely the same covariance matrix of the features) dominate the target force. In this case, the target force is defined as the degree to which the objective function $H$ can be affected by the target-network variable. In the linear case, it depends on the eigenvalues of the matrix $\gamma \Phi^{\top} D P \Phi$. Notice that this target force is governed, in part, by the transition matrix $P$. This is a major distinction between our condition and that of Melo et al., whose condition does not depend on $P$; intuitively, their condition is more related to choosing the feature matrix $\Phi$ in such a way as to contain the negative effects of maximization. To conclude, because the conditions are inherently different, the $\gamma^2$ factor in their result cannot be thought of as offering any particular advantage relative to our result.
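The two matrices involved in our condition are easy to compute for any given instance. The sketch below is our own illustration on a made-up MRP (the exact norm or eigenvalue comparison used in Theorem 3 is not reproduced here; we merely form the two forces):

```python
import numpy as np

gamma, n = 0.9, 4
rng = np.random.default_rng(2)
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # stochastic
Phi = rng.normal(size=(n, 2))                               # features
D = np.diag(np.full(n, 1.0 / n))                            # some distribution

opt_force = Phi.T @ D @ Phi                 # "optimization force": symmetric PSD
tgt_force = gamma * Phi.T @ D @ P @ Phi     # "target force": depends on P

# Rough numerical comparison of the two forces' magnitudes.
print("smallest eigenvalue of optimization force:",
      np.linalg.eigvalsh(opt_force).min())
print("largest singular value of target force:",
      np.linalg.svd(tgt_force, compute_uv=False).max())
```

Note how the target force carries the transition matrix $P$, which is the structural difference from Melo et al.'s condition highlighted above.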
Moving beyond the condition itself, a major difference is present in the final guarantees obtained by us relative to Melo et al. In our case, we are interested in a finite-time contraction result, meaning that after each outer iteration $t$ we provide a guarantee on the quality of the iterate relative to the TD fixed-point, whereas Melo et al. showed the somewhat weaker asymptotic convergence.
Moreover, our results are better grounded in the empirical side of the RL literature in two important ways, namely 1) our results support the now-ubiquitous deep RL practice of freezing the target network (which corresponds to the case of $K>1$), whereas Melo et al. study the case of $K=1$, and 2) we generalized the function class beyond linear functions and quadratic loss to support a more abstract function class $H(\theta,w)$. This abstract class includes the setting studied by the important work of Melo et al. as a special case, but it also includes other interesting examples, as explained in our general rebuttal.
We are delighted to see that our rebuttal already addressed your main concerns, and we hope you find the new discussion on Melo et al. useful as well. We really hope that you take the addressed concerns into account and raise your score to support our paper.
---
Review 4:
Summary: The paper studies the convergence conditions for Temporal Difference (TD) learning utilizing a target network, and further extends its findings to scenarios where TD minimizes alternative losses beyond mean squared error. The study is conducted by formulating TD updates as iterative optimizations with the help of a target network. Notably, the paper provides an intuitive description of the coefficients before the student's TD network parameter and the target network parameter as "optimization force" and "target force," respectively. The paper reaches the conclusion that when the optimization force dominates, the algorithm converges.
Strengths: (1) The paper is of high quality and clarity. The authors have provided a clear setting, accompanied by well-presented proofs and well-defined assumptions. The demonstration of the counterexample is worth mentioning, as it effectively and intuitively introduces the main idea of the paper.
(2) The paper extends current studies of TD to cases where alternative losses, such as Huber loss, are used, closing a gap between the theory and practical algorithms.
Weaknesses: (1) The paper focuses on the Markov Reward Process, which is not a common setting for TD convergence proofs. Why did the authors not focus on expected updates? Could the authors provide more reasoning for their choice?
(2) The paper reformulates TD updates as iterative optimizations with the help of a target network. Could the authors add more comparisons to the convergence results of TD with a target network? For example,
Breaking the Deadly Triad with a Target Network, Zhang et al. (2021)
Target-Based Temporal-Difference Learning, Lee and He (2019)
(3) Some empirical results to show that the empirical contraction factor aligns with their theory findings would be great.
(4) The paper proposes an interesting point: for some safe state distribution, TD converges in the off-policy case. Could authors provide the analytical form of safe distributions? Also, how should we compute these distributions in practice?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: (1) Do Huber loss, logistic loss and entropy loss satisfy all three assumption stated in Section 6 for inexact approximation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper did not discuss its limitations. Some discussion of an extension to stochastic updates and non-linear settings would be fascinating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent carefully reviewing the paper and for appreciating our work. Please find below some clarification regarding your questions.
- The paper focuses on Markov Reward Process, which is not a common setting for TD convergence proof. Why authors did not focus on expected updates? Could authors provide more reasoning for their choice?
Thank you for highlighting this question. In our case, we just followed the MRP setting studied in Chapter 11 of Sutton and Barto (specifically Example 11.1). Having examined our results based on your question, we found that from the optimization point of view, studying TD in this MRP setting is identical to the setting where we look at the expected TD update. Therefore, our results can be framed in the expected setting and we are happy to add this framing to the paper.
- Could authors add more comparisons to the convergence results of TD with a target network? For example, Breaking the Deadly Triad with a Target Network, Zhang et al. (2021) Target-Based Temporal-Difference Learning, Lee and He (2019)
Sure. We start by better situating our paper with respect to Lee and He (2019). While previous work primarily focused on $K=1$ (the setting of changing the target network after each update to the online network) the work of Lee and He (2019) was, to the best of our knowledge, the first paper that tackled general $K$. One limitation of their result is that they could only show that TD will find a solution in a region around the fixed-point because with finite $K$ the analysis needs to account for errors that are accumulated in each iteration. Another way to think about their result is that they can only show contraction if one uses gradient descent with infinite $K$ (which corresponds to exactly solving each iteration). With finite $K$, they need to account for errors in solving each iteration (denoted by $\epsilon_k$ in their proofs such as in Theorem 3), which prevented them from obtaining an exact convergence result.
In contrast, in light of the more practical literature on TD where a finite $K$ is used, we were interested in showing that 1) TD converges to the TD fixed-point exactly and 2) with any finite value of $K$, the TD algorithm gives us a contraction. We indeed show in our main result that TD is a contraction for any value of $K$, and we believe ours is the first paper to show this kind of result in the RL literature. More concretely, we show that smaller values of $K$ can worsen the contraction factor, but each iteration nevertheless remains contractive regardless of $K$.
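The contraction claim above can be illustrated numerically. Below is a minimal sketch (our own toy construction, not the paper's proof setup) of on-policy linear TD with a frozen target network: the 3-state MRP, features, discount, and step size are illustrative assumptions, and the iterate's distance to the TD fixed-point shrinks even though only a finite number $K$ of inner gradient steps is taken per target update.

```python
import numpy as np

# Toy on-policy MRP: doubly stochastic P => uniform stationary distribution.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
D = np.diag(np.full(3, 1.0 / 3.0))     # on-policy state distribution
r = np.array([1.0, 0.0, -1.0])
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])           # linear features, full column rank
gamma, alpha, K, T = 0.9, 0.5, 10, 200

# TD fixed point theta* solves  Phi^T D (I - gamma P) Phi theta* = Phi^T D r
A = Phi.T @ D @ (np.eye(3) - gamma * P) @ Phi
theta_star = np.linalg.solve(A, Phi.T @ D @ r)

theta = np.zeros(2)                    # target-network parameters
w = theta.copy()                       # online-network parameters
errs = []
for _ in range(T):
    y = r + gamma * P @ (Phi @ theta)  # bootstrapped targets from the frozen net
    for _ in range(K):                 # K gradient steps on the online network
        w -= alpha * Phi.T @ D @ (Phi @ w - y)
    theta = w.copy()                   # sync the target network
    errs.append(np.linalg.norm(theta - theta_star))
```

In this toy run the distance to the fixed point decays geometrically, consistent with each outer iteration being a contraction for finite $K$.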
In terms of comparison with Zhang et al. (2021), notice that due to the difficulties of proving convergence for vanilla TD, a line of existing research equips TD with modifications that make it more conducive to convergence. In this case, Zhang et al. (2021) introduced two projection steps that are crucial for obtaining convergence. In contrast, we were able to obtain a general result showing that vanilla TD converges with any value of $K$, and our result also supports TD convergence in a broader setting.
- Some empirical results to show that the empirical contraction factor aligns with their theory findings would be great.
In deriving a generalized proof for TD, our motivation was to explain the remarkable empirical success of TD with general $K$ and beyond linear function approximators. We would like our theory to complement the existing evidence on the solid empirical performance of TD in the literature, and to provide further theoretical evidence that TD is a sound algorithm in a broader setting than was understood in previous work.
- The paper proposes an interesting point: for some safe state distribution, TD converges in the off-policy case. Could authors provide the analytical form of safe distributions? Also, how should we compute these distributions in practice?
This is in fact a very important open question. We note that our interest in this open question is itself an insight driven by our optimization perspective. We believe that highlighting this open question, as noticed by the reviewer, is a merit and not a weakness of our work. We believe that this question should be investigated deeply in future work, and that, while quite important, it remains outside the scope of our paper.
- Do Huber loss, logistic loss and entropy loss satisfy all three assumptions stated in Section 6 for inexact approximation?
Please see our detailed discussion in the general comment part.
- Some discussions on an extension to stochastic updates and non-linear settings would be fascinating
Again, please see our detailed discussion in the general comment part about non-linear examples. Also, based on our preliminary investigation, our theory can be extended to the case with stochastic gradients, so publishing this paper will open the gate for a more thorough investigation of this case and beyond.
---
Rebuttal 2:
Title: Thanks for the detailed explanations!
Comment: Most of my questions are answered, especially my concerns on comparison to other target-based papers and showing examples satisfying the assumptions. Meanwhile, the discussion with Reviewer gJiK on when $F_{\theta} < F_{\omega}$ is inspiring. It would be great if the condition can be presented more straightforwardly, for example, directly stating the conditions on the feature matrix and state distribution.
I have raised my score from 5 to 7.
---
Rebuttal Comment 2.1:
Title: Thanks
Comment: We are delighted to see that the reviewer found it helpful to read our discussion on 1- comparison to other target-based papers, 2- satisfying the assumptions, and 3- off-policy distributions that might achieve better contraction factors. We will add these discussions to the paper, and will also present them in a more straightforward manner.
Thanks for your continued support, and for raising your score. We highly appreciate it! | Rebuttal 1:
Rebuttal: We appreciate the thoughtful feedback provided to us by our reviewers. All reviewers agreed that our results are clearly articulated. Notably, Reviewer 16Sb believes that the paper is of high quality and is accompanied by well-presented proofs. Also, Reviewer fasm confirms that our results are novel and make significant progress towards a better understanding of TD convergence.
In light of the reviews, we realized that we could have done a better job in articulating the major contributions of our work. We reiterate here that our paper has made two significant advancements in terms of generalizing existing results on TD convergence:
1- We believe ours is the first paper to show a contraction for TD with a frozen target network and general $K$. To elaborate, to the best of our knowledge, results prior to Lee and He (2019) mainly considered the case where we either never freeze the target network (corresponding to $K=1$), or the somewhat unrealistic case where we can exactly solve each iteration. Lee and He (2019) showed guarantees for the more general case of finite $K>1$, but notice that, while their result is quite innovative, they leaned on the more standard optimization tools for ensuring that gradient descent with a fixed $K$ can only solve each iteration approximately. Therefore, each iteration results in some error. In their theory this error accumulates per iteration and needs to be accounted for in the final result. Therefore, they fell short of showing 1) contraction and 2) exact convergence to the TD fixed-point, and only showed that the final iterate lies in a vicinity of the fixed-point defined by the amount of error accumulated over the trajectory.
In contrast, we actually proved exact convergence to the TD fixed-point by showing that each iteration is indeed a contraction regardless of the value of $K$. Even though Lee and He are correct in saying that using a finite $K$ results in approximately solving each iteration, we can still show that each iteration remains a contraction by looking at the net effect of the $K$ updates to the online network and the single update to the target network. We prove that the net effect of these updates ensures that the iterate makes steady progress towards the unique fixed-point regardless of $K$. To the best of our knowledge, this result is completely novel, and takes a major step in supporting the soundness of TD as well as successor algorithms (such as DQN) that use a frozen target network.
2- We believe ours is the first paper to show convergence of TD in the most natural extension of the quadratic objective. To ensure that this extension is possible, we made two additional assumptions, namely the Lipschitz continuity of the objective with respect to the target network, and the strong convexity of the objective with respect to the online network. While these assumptions hold in the linear case with quadratic loss, some of the reviewers asked us to elaborate further on the validity of these assumptions for mainstream loss functions and function approximators.
To this end, we present two families of loss functions where our assumptions can hold easily. In particular, to explain the first family, recall that TD could be thought of as solving for a sequence of optimization problems as follows:
$\theta^{t+1} \leftarrow \arg\min_{w} H(\theta^{t},w).$
Now suppose we can write the function $H(\theta,w)$ as the sum of two separate functions $H(\theta,w) = G(\theta, w) + L(w)$, where the function $L(w)$ is strongly convex with respect to $w$. This setting is akin to using ridge regularization, which is quite common in deep learning (for example, AdamW). This allows us to work with functions $G$ that are only convex (in fact, they can technically be weakly convex) with respect to $w$. We provide two examples:
1- Suppose we would like to stick with the linear function approximation architecture. Then, the function $G$ could be constructed using any convex loss where $\nabla_w G(\theta,w)$ is Lipschitz-continuous with respect to $\theta$. Examples that satisfy this include the Logistic loss or the Huber loss.
2- Suppose we want to use the more powerful convex neural networks. We need the loss function to be convex and monotonically increasing so that the resultant function $G$ is still convex. This is due to the classical result on the composition of convex operators. One example is the quadratic loss where we restrict the output of the function approximator to positive values. Such neural nets are also Lipschitz continuous given proper activation functions such as ReLU.
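To make the first family concrete, here is a hedged numerical sketch: a convex Huber-type TD loss playing the role of $G(\theta,w)$, with linear features, plus a ridge term $L(w)=\frac{\mu}{2}\|w\|^2$ making $H$ strongly convex in $w$. The toy MRP, features, and all constants are our own illustrative assumptions, not taken from the paper; the successive target-network iterates still converge with $K$ inexact inner steps.

```python
import numpy as np

def huber_grad(u, delta=1.0):          # gradient of the (convex, smooth) Huber loss
    return np.clip(u, -delta, delta)

# Toy on-policy MRP with linear features (same spirit as Sutton & Barto examples).
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
D = np.diag(np.full(3, 1.0 / 3.0))
r = np.array([1.0, 0.0, -1.0])
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
gamma, mu, alpha, K, T = 0.9, 0.1, 0.3, 25, 100

theta = np.zeros(2)                    # target-network parameters
w = theta.copy()                       # online-network parameters
diffs = []
for _ in range(T):
    y = r + gamma * P @ (Phi @ theta)  # frozen bootstrapped targets
    for _ in range(K):                 # inexact inner minimization of H(theta, .)
        grad = Phi.T @ D @ huber_grad(Phi @ w - y) + mu * w   # grad G + grad L
        w -= alpha * grad
    diffs.append(np.linalg.norm(w - theta))
    theta = w.copy()                   # sync the target network
```

The gap between successive target-network iterates shrinks geometrically, illustrating that the $G + L$ decomposition keeps the iteration well-behaved even with a non-quadratic loss.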
Beyond this family, since the submission we have identified a second family, namely the control setting (beyond prediction), where a greedification operator is needed for bootstrapping. For example, with the quadratic loss we could have:
$H(\theta,w)=\frac{1}{2}\sum_s d(s)\sum_a \pi(a|s)\left(\mathbb{E}_{s'}\left[r+\gamma\max_{a'}q(s',a',\theta)\right]-q(s,a,w)\right)^2.$
We again need the two assumptions, namely strong convexity with respect to $w$ and Lipschitzness of $\nabla_w H(\theta,w)$ with respect to $\theta$, to hold. Lee and He (2020) already showed the strong convexity of this objective with respect to $w$, but we still need to show the Lipschitz property of $\nabla_w H(\theta,w)$ with respect to $\theta$; note that Lee and He (2020) showed the Lipschitz property only with respect to $w$, not with respect to $\theta$. We are now able to show this result; please see the proof in the pdf attached to this rebuttal. Our proof also supports other greedification operators, such as softmax, so long as these operators are non-expansive. We will add this result to the paper. Together, this gives another example of loss functions of the form $H(\theta,w)$ that satisfy our assumptions.
Lee and He ``Target-based Q-learning", 2020.
Pdf: /pdf/cb142bd076e7a955f041fc0b84a6eaf00ac6cb42.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CosNet: A Generalized Spectral Kernel Network | Accept (poster) | Summary: The authors propose a complex valued neural architecture, composed of two modules: a Spectral kernel mapping generalization module and a Complex-valued spectral kernel embedding module.
They provide a generalization error bound for their model. In addition they propose a novel initialization scheme and provide experimental results on several datasets and learning tasks.
Strengths: The proposed approach seem to be novel, and the experimental results are promising.
Weaknesses: In my view, the main weakness of the manuscript is the presentation. While the approach itself seems sound, I find the presentation too poor for a conference like Neurips. Therefore I encourage the authors to re-write the manuscript and make sure it reads much better.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: No
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: In my view, the main weakness of the manuscript is the presentation. While the approach itself seems sound, I find the presentation too poor for a conference like Neurips. Therefore I encourage the authors to re-write the manuscript and make sure it reads much better.**
**Response:**
We sincerely appreciate your time and effort in reviewing our manuscript. In response to your comment regarding the presentation of our manuscript, we would like to bring to your attention that reviewers gGnJ and BAqh have both commended the contributions of our CosNet. They found our methodology and approach to be well-explained and insightful. This suggests that our efforts to enhance the clarity and presentation of our work have been positively received by these reviewers.
Nonetheless, we understand the importance of ensuring the highest level of clarity and coherence throughout the manuscript. We value your constructive feedback and are dedicated to refining our manuscript based on your comments. We look forward to incorporating more of your insights into our revision.
---
Rebuttal Comment 1.1:
Title: Acknowledging read of the rebuttal
Comment: I thank the authors for their rebuttal and effort.
I will leave my rating as-is.
Yet, indeed it seems that two of the other reviewers find the manuscript clearer than I did. I will leave it to the AC to take it further.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thanks for your reply. | Summary: The paper proposes a new framework called Complex-valued spectral kernel network (CosNet) that generalizes the spectral kernel to include complex-valued representation. The proposed framework improves the representational capability of the spectral kernel and outperforms existing kernel methods and complex-valued neural networks. An initialization scheme for the complex-valued weight matrix is proposed, which ensures that CosNet retains the property of non-stationary spectral kernels and takes the relative distance of data in the complex number domain without increasing the number of parameters. The paper provides the lower generalization bound of CosNet than the real-valued non-stationary spectral kernel. The experiments demonstrate that CosNet performs better than the mainstream kernel methods and complex-valued neural networks in time-sequential data analysis.
Strengths: 1. The paper proposes a new framework (CosNet) and provides a theoretical analysis of it.
2. The experiments demonstrate that CosNet outperforms existing kernel methods in time-series data analysis, which shows the practical significance of the proposed framework.
3. The writing of the paper is good.
Weaknesses: 1. The experiments presented in the paper lack sufficient evidence to convincingly demonstrate the properties of the model.
2. The dataset used in the experiments is limited in diversity, which raises concerns about the generalizability of the findings.
3. The initialization of CosNet's parameters varies across different datasets, and a unified initialization method is needed for consistency and reproducibility.
Typo: In line 168, the imaginary part should also be multiplied by the weight $1/\sqrt{\frac{1}{4M}}$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The paper presents experiments comparing the accuracy of CosNet to other methods with similar network size on the same dataset, highlighting the capacity gain from introducing the imaginary part. However, I believe that accuracy alone may not fully demonstrate the improvement of the model's representation capacity. Could the authors provide more evidence of CosNet's capacity gain from other aspects, such as compression and recovery of more complex datasets?
2. The experiments in the paper adopt time-series datasets with low complexity. Could the authors provide more results that demonstrate the performance of CosNet on tasks such as image encoding and decoding or the signal processing in real-world scenarios, since these datasets may also be suitable cases for complex-valued spectral kernel modeling.
3. In Figure 2, the authors only present the visualization results of two other methods for predicting and displaying the original time series curve. Could the authors provide additional visualizations of the performance of other comparable methods for comparison? This would provide a more comprehensive and fair comparison of the proposed method against existing ones.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The initialization of CosNet's parameters varies across different datasets, and a unified initialization method is needed for consistency and reproducibility.**
**Response:** To ensure the reproducibility of our experimental findings, we unify the hyper-parameters, and partial updated results (under the same learning rate (0.01), initialization (p = 0.01), and number of layers (5)) are reported in the following table. The new results show that our CosNet performs better than the baseline methods, and all the related results will be reported in the revised main paper.
| Dataset | SRFF | DSKN | \(DCN^1\) | \(DCN^2\) | ASKL | CosNet |
|-------------------------------|--------|--------|----------|----------|--------|---------|
| FordB | 68.99 | 69.81 | 69.68 | 50.17 | 64.20 | **71.73** |
| Wine | 77.22 | 76.48 | 83.06 | 80.00 | 67.41 | **85.46** |
| ECG200 | 73.40 | 77.80 | 89.80 | 89.85 | 87.53 | **90.10** |
| ECG5000 | 91.98 | 91.14 | 94.11 | 93.50 | 92.75 | **93.70** |
| Herring | 57.73 | 56.64 | 65.23 | 58.13 | 59.52 | **65.39** |
**Q2: The experiments in the paper adopt time-series datasets with low complexity. Could the authors provide more results that demonstrate the performance of CosNet on tasks such as image encoding and decoding or the signal processing in real-world scenarios, since these datasets may also be suitable cases for complex-valued spectral kernel modeling.**
**Response:** On the one hand, we have included the automatic modulation (AM) classification task using the real-world signal dataset RML2016.10a [1], a classical complex-valued signal dataset. We conduct a comparative analysis between our proposed method and baseline approaches across varying signal-to-noise ratios (SNRs) ranges. On the other hand, we expand the application of CosNet to convolutional networks for image classification tasks on the FashionMNIST and CIFAR-10 datasets. All the results show the effectiveness of our CosNet. Detailed results are provided in the following table for a comprehensive overview and will be incorporated into the revised manuscript.
| Tasks | Datasets | \(DCN^1\) | \(DCN^2\) | CosNet |
|---------------------|--------------|----------|----------|----------|
| Image classification| FashionMNIST | 87.02 | 84.44 | **88.33**|
| | CIFAR-10 | 64.32 | 52.39 | **66.51**|
| Tasks | SNRs range | \(DCN^1\) | \(DCN^2\) | CosNet |
|--------------------|------------|----------|----------|----------|
| AM classification | 10-18 | 81.14 | 79.98 | **81.89**|
| | 0-8 | 79.27 | 77.45 | **79.70**|
[1] O'shea T J, West N. Radio machine learning dataset generation with gnu radio[C]//Proceedings of the GNU Radio Conference. 2016, 1(1).
**Q3: The paper presents experiments comparing the accuracy of CosNet to other methods with similar network size on the same dataset, highlighting the capacity gain from introducing the imaginary part. However, I believe that accuracy alone may not fully demonstrate the improvement of the model's representation capacity. Could the authors provide more evidence of CosNet's capacity gain from other aspects, such as compression and recovery of more complex datasets?**
**Response:** To further explore the representation ability of our CosNet, we conduct a more complex task using the FashionMNIST and CIFAR-10 datasets. Concretely, we first extract the implicit features through various models, and then conduct a clustering task based on these extracted features. In this task, Normalized Mutual Information (NMI) and Rand Index (RI) are used as the assessment metrics. Our CosNet achieves a **5.6\%** NMI improvement (86.04\% $\rightarrow$ 90.86\%) and a **1.19\%** RI improvement (97.15\% $\rightarrow$ 98.31\%) on the FMNIST dataset, and a **15.82\%** NMI improvement (57.14\% $\rightarrow$ 66.18\%) and a **2.5\%** RI improvement (87.78\% $\rightarrow$ 89.97\%) on the CIFAR-10 dataset. The results show that our CosNet has greater representation capability than other complex-valued convolutional networks. Detailed results are provided in the following table. Further, we will include more compression and recovery tasks in the revised manuscript.
| Dataset | Metrics | \(DCN^1\) | \(DCN^2\) | CosNet |
|----------|---------|----------|----------|----------|
| FMNIST | NMI | 86.04\% | 81.31\% | **90.86\%**|
| | RI | 97.15\% | 95.97\% | **98.31\%**|
| CIFAR-10 | NMI | 57.14\% | 41.69\% | **66.18\%**|
| | RI | 87.78\% | 84.07\% | **89.97\%**|
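For readers unfamiliar with the metrics in the table, here is a small self-contained sketch of the Rand Index: the fraction of sample pairs on which two labelings agree (grouped together in both, or apart in both). NMI is computed analogously from the mutual information of the two labelings. The labels below are made-up toy values, not our experimental outputs.

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Fraction of pairs on which the two labelings agree."""
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum(
        (labels_true[i] == labels_true[j]) == (labels_pred[i] == labels_pred[j])
        for i, j in pairs
    )
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [0, 1, 1, 1]))  # prints 0.5 (3 of 6 pairs agree)
```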
**Q4: In Figure 2, the authors only present the visualization results of two other methods for predicting and displaying the original time series curve. Could the authors provide additional visualizations of the performance of other comparable methods for comparison? This would provide a more comprehensive and fair comparison of the proposed method against existing ones.**
**Response:** Figure 2 is designed to illustrate that the first layer of our CosNet effectively captures inherently complex-valued representations and seamlessly feeds them into the subsequent complex-valued networks (*i.e.*, the CSKE module). To validate this statement, we utilize DSKN and FT as the baseline methods, for two primary reasons. Firstly, the Fourier transform is extensively utilized to map real-valued data into complex-valued representations, which serve as inputs for complex-valued networks. Secondly, DSKN encompasses most existing approaches, where the imaginary part is padded with zeros: DSKN discards the imaginary part that could be learned from the real-valued data, while the other methods directly take the raw real-valued data as the real part and zeros as the imaginary part. As a result, we select these two methods for comparison purposes.
Strengths: The paper seems to propose a novel way of characterising flexible covariance kernels such that they are PSD, and some neural networks that exploit this
Weaknesses: This paper is hard to read. The main claims are difficult to extract and thus their correctness is hard to verify.
I could be persuaded that I have underestimated the soundness of the results --- they could in fact be amazing --- but I have already blown my limited time budget trying to understand what is going on, because basic framing information is missing.
I presume that some of this could be deduced by inspecting the references introduced in section 2, but I don't have time for that in the setting of urgent conf reviews. The paper needs to be self-contained, even if we defer rigorous proof to the appendices.
My certainty rating reflects my relatively high confidence that this paper needs a rewrite. I do not have a high confidence that the idea itself is flawed, and would welcome clarifications that showed me what the idea actually is, because it might be great. I currently do no understand it, because the presentation is confusing enough that it will take more than the time I have to "reverse engineer" it.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: There are a lot of words about the usefulness of the complex-valued covariance kernel which I think I can interpret, but I do not actually know for sure, because the actual inference problem is not clearly set up. What is it? What is the baseline?
I think my confusion starts in l122. Everything up to this point was fine, but now we have deployed the machinery of a complex-valued spectrum in kernel definitions, without actually explaining what is different. There should be certain things that are better for a complex kernel from the Yaglom theorem. For one, a kernel so defined does not need to be stationary, as opposed to a Bochner-theorem style kernel, which I assume is the main point, and very cool, and indeed the authors mention this right in the abstract. So I like this! I've looked at Yaglom's theorem before and thought it would be great to make it tractable to use, but I didn't see a way. Indeed this should be a very general kernel, and should generalise other kernel classes too (e.g. dot-product kernels). So the idea sounds promising.
Even here though, I'm a little confused; OK, so we are using this kernel not as, say, a covariance kernel of a Gaussian process, but rather to directly define a similarity between data points for optimal interpolation, as far as I can tell, which is fine, but can you say more about that actual implied network structure? Do we keep around the training data so we can measure the kernel-similarity to other training points, or are we happy to use it as "just another" nonlinear NN layer. In which case, what does this layer do that an MLP does *not* do?
in eq (10) we learn that the kernel is characterised by a finite list $\Omega$ of frequencies, right? Is it correct that this means we are restricting our kernel spectral "density" $\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\prime}\right)$ to be not a density as such but rather a collection of Dirac deltas in the spectral space? I suspect we need to say so, in that case.
l147/eq(8) the $M$ suddenly appeared and it looks important but is not explored. This is the number of MC samples we take to actually approximate the spectral kernel integral. So... when do we evaluate this integral? Is it inside the training loop? How do we choose $M$? Is it robust against different choices of $M$? Should we not see $M$ pop up in evaluating the computational cost of this method?
Can we put a simple function map notation description of *every operator and function*, e.g. $\Phi_\ell: \mathbb{C}^{d_{\ell}}\to\mathbb{C}^{d_{\ell+1}}$? Generally, for most functions in this paper I'm confused what they are mapping from and to. I have so many questions here that I cannot list them all. Here are some examples about function domains and ranges:
1. Are we permitting the values of the kernel outputs in the intermediate layers to be complex?
2. which values are permitted to be vectors and which scalars? I gather the inputs to the network are vectors (?) but the test examples seem to be all time series with a scalar index (?) In section 4.1a the experiments seem to be about estimating complex-valued time series, and in section 4.1b it is classifying stuff based on the implied feature mapping. Is the model more general than this?
If the authors wish to remain very general, fine, but if so, could we have a running example that makes it clear?
Since we also know that NNs exist which do not parameterise their output in terms of parameters of kernels, but directly in terms of _weights_, could we say anything about the relative representation power compared to that?
There is Theorem 1, which gives us a statistical learning theory result in terms of covering numbers, which is possibly supposed to help us. Perhaps the problem here is my own ignorance, but what is the equivalent results for the baseline that the authors hope to surpass? Can you quote an equivalent theorem for a baseline? Should I know one? Or is the intent here to show us how to trade off between allocating weights to _including more frequencies in a given kernel layer_ versus _adding more layers_? Can you spell out the implications of this theorem in terms of "wide versus deep complex kernel networks" and also "whether complex kernel networks are better than MLPs"?
Related: Why do we need to stack layers at all, rather than simply learning a big kernel?
4.1: There is a neural network here. I support the authors not wasting space with excessive details about Adagrad learning rates etc, but I need just a little more information to know what is happening. What is the training procedure? If I am used to ERM for training an NN, do I need to worry about some alternative methods for these exotic kernel networks, or is it the same? If it is the same, why am I thinking about the kernels directly rather than just learning an MLP? remember, since the network has been non-specific throughout the paper, this is my example to learn what actual inputs and outputs this network can predict upon.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: I don't know. I have a hard time deducing exactly the domain of applicability of the paper from the presentation here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments! We will provide a point-to-point response in the rebuttal.
**1 Notations** In this paper, matrices, vectors and scalars are denoted by bold capital letters (*e.g.* $\pmb{X}$), bold lower-case letters (*e.g.* $\pmb{x}$) and lower-case letters (*e.g.* $x$), respectively. In addition, for each equation or function, we will include its domain and range in the revised manuscript.
**2 Network architecture and experiment setting:** In the inference procedure, exemplified by the time series classification task, the input is a time series (*i.e.* a vector) with a scalar at each time point. The output is the implied feature (*i.e.* a vector), which is used to conduct the classification task. Concretely, the operation in the first layer is defined as $\Phi:\mathbb{R}^{d^x}\rightarrow \mathbb{C}^{d^x}$, where $d^x$ denotes the dimension of the data. Via $\Phi$ in the first layer, the data are mapped to complex-valued representations, which are fed into the CSKE module starting from the second layer. The operation of the $l^{th}$ layer is defined as $\Psi^l: \mathbb{C}^{d^l}\rightarrow \mathbb{C}^{d^{l+1}}$, where $d^l$ denotes the number of hidden complex-valued neurons. After the CSKE module, we obtain the implied complex-valued features. These features are then condensed into real vector form by the operation $\mathbb{C}^{d^L}\rightarrow \mathbb{R}^{2d^L}$, which concatenates the real and imaginary parts, to conduct the classification task.
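The data flow just described can be sketched at the shape level as follows. The toy maps below are our own illustrative stand-ins for the paper's actual layer definitions (a complex exponential spectral map for $\Phi$ and a complex affine map with a split nonlinearity for $\Psi^l$); only the domains, ranges, and the final real/imaginary concatenation follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h, n_layers, n_classes = 16, 32, 5, 2

def first_layer(x, Omega):
    # Phi: R^{d_x} -> C^{d_h}; a spectral-style map producing complex features
    return np.exp(1j * (Omega @ x))

def complex_layer(h, W):
    # Psi^l: C^{d_l} -> C^{d_{l+1}}; complex affine map + split-type nonlinearity
    u = W @ h
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

Omega = rng.normal(0.0, 0.1, size=(d_h, d_x))       # weights ~ N(0, 0.01)
Ws = [rng.normal(0.0, 0.1, size=(d_h, d_h))
      + 1j * rng.normal(0.0, 0.1, size=(d_h, d_h))
      for _ in range(n_layers - 1)]
W_out = rng.normal(size=(n_classes, 2 * d_h))       # real-valued classifier head

x = rng.normal(size=d_x)                  # one real-valued time series window
h = first_layer(x, Omega)
for W in Ws:
    h = complex_layer(h, W)
feat = np.concatenate([h.real, h.imag])   # C^{d_h} -> R^{2 d_h}
logits = W_out @ feat                     # real logits for classification
```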
In the experiments, the learning rate, number of epochs and number of layers are set to 0.01, 500 and 5, respectively. The batch size is equal to the number of samples, and the width of the network for each dataset depends on the length of the time series. The initialized weight matrices are sampled from the normal distribution $\mathcal{N}(0, 0.01)$. The detailed information will be shown in the updated paper.
**3 Kernel or nonlinear NN layer:** Note that, in CosNet, the kernel is defined using an explicit kernel mapping rather than the covariance matrix. Specifically, Yaglom's theorem establishes a connection between a kernel and its spectral density. Based on Monte Carlo random sampling, we can approximate the kernel with an explicit kernel mapping, as in eq. (8). Our CosNet is constructed by stacking this explicit kernel mapping over multiple layers. Notably, with eq. (8), we do not need to calculate the integral during training. Here, $M$ is the number of MC samples, also referred to as the number of frequencies, and it equals the number of features in this paper. Moreover, to explore the influence of $M$ on the results, we conduct an experiment with our CosNet where $M$ is set in the range $[\frac{d^x}{4}, \frac{d^x}{2}, d^x, 2d^x, 4d^x]$. The result shows that while the value of $M$ does exert some influence on the results, the effect is relatively minor. Please see TABLE I in the added pdf file for more detailed results; it will be shown in the revised paper.
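A hedged sketch of the Monte Carlo idea just described: sample $M$ frequencies from a spectral density and build an explicit feature map whose inner product approximates the kernel, with no integral evaluated at training time. As a checkable special case (our own simplification, not the paper's general construction) we take paired frequencies $\omega_i = \omega'_i \sim \mathcal{N}(0, I)$, for which the construction reduces to random Fourier features for the stationary Gaussian kernel.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 3, 200_000                     # input dimension and number of MC samples

W = rng.normal(size=(M, d))           # M sampled frequencies (w_i = w'_i case)

def feature_map(x):
    z = W @ x
    # explicit map: [cos; sin] parts, jointly weighted by 1/sqrt(M)
    return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(M)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = feature_map(x) @ feature_map(y)          # kernel via inner product
exact = np.exp(-0.5 * np.sum((x - y) ** 2))       # Gaussian kernel, unit lengthscale
```

The Monte Carlo approximation error shrinks like $1/\sqrt{M}$, which is consistent with the observation above that results are fairly robust to the choice of $M$ once it is moderately large.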
**4 Complex-valued kernel network versus MLPs:** Regarding framework generality, our CosNet can analyze not only complex-valued data but also real-valued data that inherently contains complex-valued information, whereas MLPs are confined to analyzing solely real-valued data. Theoretically, our CosNet has greater representation ability than MLPs. Concretely, we bound the covering number of different layers in CosNet. The covering number serves as an indicator of a model's representation ability: the larger the covering number, the greater the representation ability, but the more difficult it is to find the optimal solution. In Theorem 1, the covering number of each layer is bounded by $(2d^ld^{l-1})^k$ and $(4d^ld^{l-1})^k$ in MLPs and real-valued non-stationary spectral kernel networks, respectively. This provides insight into the comparison: 1) compared with MLPs, our CosNet has greater representation ability, with a covering number bound of $(4d^ld^{l-1})^k$ in the first layer; 2) compared with real-valued spectral kernel networks, our CosNet makes it easier to find the optimal solution, with a covering number bound of $(2d^ld^{l-1})^k$ from the second layer onward. Therefore, our CosNet combines the advantages of kernel networks and MLPs: it has stronger characterization ability and more easily finds the optimal solution. We will show more analysis in the revised paper.
**5 Why do we need to stack layers at all, rather than simply learning a big kernel?** It is important to note that traditional kernel methods are confined to learning a single layer of nonlinear features, potentially constraining their representational capacity. Inspired by neural networks, which learn multi-layer hierarchical representations, deep kernels (stacked kernels) were developed to learn a hierarchy within a Reproducing Kernel Hilbert Space, yielding a cascade of nonlinear features. A stacked kernel therefore combines the advantages of kernels and neural networks. For a more comprehensive understanding, we refer interested readers to the detailed explanations in the reference titled 'Stacked Kernel Network'. Moreover, we perform a comparative analysis on the ECG200 dataset between stacked kernel networks and the corresponding big kernels with 1024 Monte Carlo samples. The results show that the stacked kernel performs better than a single-layer big kernel. Please see TABLE II in the added pdf file for more detailed results, which will be shown in the revised manuscript.
**6 in eq (10) we learn that the kernel is characterised by a finite list $\pmb{\Omega}$
of frequencies, right? Is it correct that this means that we are restricting our kernel spectral "density" $(\pmb{\omega}, \pmb{\omega}')$
to be not a density as such but rather a collection of Dirac deltas in the spectral space? I suspect we need to say so, in that case.** Yes, you are right! We will state this point in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you, this is very helpful. Your revised explanation has substantially improved my understanding of the paper and my estimation of the significance of your results. I will revise my rating accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: We appreciate your reply and are delighted that our explanation can help you understand our approach. Your comments are helpful for improving our work; we will include more details in the revised manuscript. If there are any other questions, we will discuss them promptly. | Summary: This paper mainly focuses on the issue that spectral kernel-based methods often eliminate the imaginary part when analyzing the characteristics of time-sequential data. This limits the representation capability of the spectral kernel. To address this issue, the authors propose a complex-valued spectral kernel network to take both the real and imaginary parts into account. The proposal mainly consists of two parts - the SKMG module recovers the complex-valued representation for the real-valued data and the CSKE module combines the complex-valued spectral kernels and neural networks. Theoretical and empirical results show the proposed method achieves state-of-the-art performance.
Strengths: 1. The proposed method is well-motivated. Involving the imaginary part is critical for preserving the amplitude and phase information for data and improving the representational capability of spectral kernel networks.
2. The proposed approach is sound. The complex-valued representation for the real-valued data is recovered by the SKMG module and complex-valued spectral kernels are combined with neural networks via the CSKE module. The two parts are well integrated.
3. Theoretical analysis and experimental evaluation are provided to show the state-of-the-art performance of the proposal.
Weaknesses: 1. In Section 3.3, the authors define the complex-valued weight matrix by Equation 11. But it is unclear why this design ensures that the sub-network containing the first layer to an arbitrary l-th layer can be seen as a spectral kernel.
2. The experimental results appear to be dependent on the choices of hyperparameters. Performance of CosNet with different learning rates, initializations and layer numbers should be provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: In Section 3.3, the authors define the complex-valued weight matrix by Equation 11. But is unclear why this design ensures that the sub-network containing the first layer to arbitrary l-th layer is seen as a spectral kernel.**
**Response:**
Thanks for this valuable comment. For our CosNet, the first layer (*i.e.*, the SKMG module) is constructed from Yaglom's theorem, naturally resulting in a spectral kernel. We ensure that the sub-network from the first layer to the $l$-th layer is a spectral kernel by elaborating the complex-valued weight matrix and the feed-forward procedure explained in Equation (12). Upon expanding Equation (12), it becomes apparent that the sub-network from the first layer to an arbitrary $l$-th layer can be seen as a spectral kernel with the defined complex-valued weight. To enhance clarity, we present an illustrative example in $\mathbb{C}^2$ below; a more detailed explanation will be added to the revised manuscript.
**Example:**
Let
$$
\pmb{z} = \begin{bmatrix}
\cos(u_{11}) + \cos(u_{11}') \\\\
\cos(u_{21}) + \cos(u_{21}')
\end{bmatrix} + i \begin{bmatrix}
\sin(u_{11}) + \sin(u_{11}') \\\\
\sin(u_{21}) + \sin(u_{21}')
\end{bmatrix} \in \mathbb{C}^2
$$
be the output of the first layer. The complex-valued weight matrix is defined as $\pmb{W}=\cos(\pmb{A}) + i\sin(\pmb{A})$, where $\pmb{A}=[a_{11}, a_{12}] \in \mathbb{R}^{1 \times 2}$ is a real-valued matrix. The complex-valued mapping is defined as:
$$
\Psi(\pmb{z}) = \pmb{W}*\pmb{z}
=(\cos(\pmb{A})+i\sin(\pmb{A})) * (\begin{bmatrix}
\cos(u_{11}) + \cos(u_{11}') \\\\
\cos(u_{21}) + \cos(u_{21}')
\end{bmatrix} + i\begin{bmatrix}
\sin(u_{11}) + \sin(u_{11}') \\\\
\sin(u_{21}) + \sin(u_{21}')
\end{bmatrix})
$$
We rewrite the complex-valued mapping in the following matrix notation, using the angle-addition identities $\cos(a+u)=\cos(a)\cos(u)-\sin(a)\sin(u)$ and $\sin(a+u)=\sin(a)\cos(u)+\cos(a)\sin(u)$:
$$
\Psi(\pmb{z})
=\begin{bmatrix}
\cos(\pmb{A}) & -\sin(\pmb{A}) \\\\
\sin(\pmb{A}) & \cos(\pmb{A})
\end{bmatrix} * \begin{bmatrix}
\Re(\pmb{z}) \\\\
\Im(\pmb{z})
\end{bmatrix}
=\begin{bmatrix}
\cos(a_{11}) & \sin(a_{11}) \\\\
\cos(a_{12}) & \sin(a_{12}) \\\\
-\sin(a_{11}) & \cos(a_{11}) \\\\
-\sin(a_{12}) & \cos(a_{12})
\end{bmatrix}^\top *
\begin{bmatrix}
\cos(u_{11}) + \cos(u_{11}') \\\\
\cos(u_{21}) + \cos(u_{21}') \\\\
\sin(u_{11}) + \sin(u_{11}') \\\\
\sin(u_{21}) + \sin(u_{21}')
\end{bmatrix}\\
= \begin{bmatrix}
\Psi_{a_{11},u_{11},u_{11}'} + \Psi_{a_{12},u_{21},u_{21}'} \\\\
\Psi_{a_{11},u_{11},u_{11}'}' +
\Psi_{a_{12},u_{21},u_{21}'}'
\end{bmatrix}
$$
where $\Psi_{a_{11},u_{11},u_{11}'}=\cos(a_{11}+u_{11})+\cos(a_{11}+u_{11}')$, $\Psi_{a_{12},u_{21},u_{21}'}=\cos(a_{12}+u_{21})+\cos(a_{12}+u_{21}')$, $\Psi_{a_{11},u_{11},u_{11}'}'= \sin(a_{11}+u_{11})+\sin(a_{11}+u_{11}')$, and $\Psi_{a_{12},u_{21},u_{21}'}'= \sin(a_{12}+u_{21})+\sin(a_{12}+u_{21}')$.
We can observe that $\Psi_{a_{11},u_{11},u_{11}'}$, $\Psi_{a_{12},u_{21},u_{21}'}$, $\Psi_{a_{11},u_{11},u_{11}'}'$, and $\Psi_{a_{12},u_{21},u_{21}'}'$ can each be seen as a spectral kernel term. Hence, the sub-network from the first layer to an arbitrary $l$-th layer is also a spectral kernel.
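The expansion can be checked numerically; the sketch below assumes, as in the example, that the sine part of the first-layer output shares the cosine part's frequencies:

```python
import numpy as np

rng = np.random.default_rng(2)

# Frequencies of the first-layer output (assuming the sine part shares
# the cosine part's frequencies, as in the worked example).
u = rng.uniform(0, 2 * np.pi, size=2)    # u_11, u_21
up = rng.uniform(0, 2 * np.pi, size=2)   # u_11', u_21'
a = rng.uniform(0, 2 * np.pi, size=2)    # a_11, a_12

z = (np.cos(u) + np.cos(up)) + 1j * (np.sin(u) + np.sin(up))
W = np.cos(a) + 1j * np.sin(a)           # 1x2 complex weight row

out = W @ z                              # direct complex product

# Claimed closed form: sums of phase-shifted cosines / sines.
re = np.sum(np.cos(a + u) + np.cos(a + up))
im = np.sum(np.sin(a + u) + np.sin(a + up))

print(np.allclose(out.real, re), np.allclose(out.imag, im))  # True True
```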
**Q2: The experimental results appear to be dependent on the choices of hyperparameters. Performance of CosNet with different learning rates, initializations and layer numbers should be provided.**
**Response:**
Thanks for this great comment. To ensure the reproducibility of our experimental findings, we unify the hyper-parameters, and partial updated results (under the same learning rate (0.01), initialization (p = 0.01), and number of layers (5)) are reported in the following table. All the related results will be reported in the revised manuscript.
| Dataset | SRFF | DSKN | \(DCN^1\) | \(DCN^2\) | ASKL | CosNet |
|-----------------------------|--------|--------|----------|----------|--------|--------|
| FordB | 68.99 | 69.81 | 69.68 | 50.17 | 64.20 | **71.73** |
| Wine | 77.22 | 76.48 | 83.06 | 80.00 | 67.41 | **85.46** |
| ECG200 | 73.40 | 77.80 | 89.80 | 89.85 | 87.53 | **90.10** |
| ECG5000 | 91.98 | 91.14 | 94.11 | 93.50 | 92.75 | **93.70** |
| Herring | 57.73 | 56.64 | 65.23 | 58.13 | 59.52 | **65.39** |
Furthermore, to evaluate the generalization of our CosNet, we explore the influence of varying hyper-parameters on the result based on ECG200 dataset. The result shows the superior performance and stability of our CosNet. Please see the results in the following table for more details, and we will include these results and more analysis in the revised manuscript.
| lr | init (p) | SRFF | DSKN | \(DCN^1\) | \(DCN^2\) | ASKL | CosNet |
|--------|----------|--------|--------|----------|----------|--------|---------|
| 0.1    | 1        | 64.00  | 60.85  | **90.30**| 83.15    | 61.00  | 86.05   |
| 0.1 | 0.1 | 85.55 | 61.50 | **90.30**| 83.15 | 80.20 | 85.85 |
| 0.1 | 0.01 | 78.25 | 62.80 | **90.30**| 83.15 | 74.10 | 85.05 |
| 0.01 | 1 | 52.60 | 64.00 | 88.10 | 84.05 | 75.00 | **90.05**|
| 0.01 | 0.1 | 85.40 | 67.95 | 88.10 | 84.05 | 89.75 | **91.30**|
| 0.01 | 0.01 | 73.40 | 77.80 | 88.10 | 84.05 | 87.53 | **90.10**|
| 0.001 | 1 | 50.85 | 62.75 | 80.35 | 79.25 | 72.55 | **89.25**|
| 0.001 | 0.1 | 83.50 | 73.00 | 80.35 | 79.25 | **90.90**| 90.25 |
| 0.001 | 0.01 | 64.00 | 84.40 | 80.35 | 79.25 | 88.90 | **90.45**|
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the detailed response. It has well improved my understanding of the paper. I am keeping my positive score unchanged for now and no more questions at this time.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: We appreciate your recognition and are delighted that our explanation can help you understand our work. We will include more details in the revised manuscript based on your comments. | Rebuttal 1:
Rebuttal: We include additional experimental results in the pdf.
Pdf: /pdf/4fccb1c5cc6c34bac6a39d22f62d86beedd9e9a8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DiffUTE: Universal Text Editing Diffusion Model | Accept (poster) | Summary: This paper describes an application of diffusion models (Sohl- Dickstein et al., 2015; Ho et al., 2020) to text editing. Methodologically, this work differs from previous text diffusion (Li et al., 2022) by leveraging insights on glyph encoder and OCR detector. Empirically, this work advances the state of the art for text editing by scaling these methods to larger datasets. The paper also proposes to use self-supervised training to train the diffusion model and further explore diffusion guidance.
Strengths: 1. This is the work on diffusion LMs that shows results on a text editing baseline and seems to have positive results in terms of the metrics used.
2. The writing is clear and the motivations seem sound.
Weaknesses: 1. The main weakness is the novelty. The core idea of this paper, i.e. latent diffusion, has been demonstrated to be successful in many generation tasks. Thus it is not surprising that it works on scene text editing. Most of the techniques used in the paper have been proposed previously.
2. The author did not provide any details regarding the position control module, thus the ablation study of this part is not convincing.
3. The authors did not evaluate a variety of evaluation measures that prior work has used, such as SSIM, MSE, PSNR, and many more. These metrics should be computed to get a better idea of the quality and diversity of the output. Please see this paper for a description of these metrics: "Krishnan P, Kovvuri R, Pang G, et al. Textstylebrush: transfer of text aesthetics from a single example[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.".
4. There are missing comparisons, such as Krishnan et al. 2023's TextStyleBrush [1] and Ji et al. 2023's DiffSTE [2].
5. The model relies on a pretrained OCR encoder, which just seems like an arbitrary choice. An ablation should be provided with different pretrained encoders to understand the impact of this choice.
[1] Krishnan P, Kovvuri R, Pang G, et al. Textstylebrush: transfer of text aesthetics from a single example[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
[2] Ji, Jiabao, et al. "Improving Diffusion Models for Scene Text Editing with Dual Encoders." arXiv preprint arXiv:2304.05568 (2023).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The author mentioned that "in the first three stages of training, we randomly crop images of sizes 𝑆/8, 𝑆/4 and 𝑆/2 and resize them to 𝑆 for training", however, which are the three stages and where is the fourth one? Moreover, an ablation that would be of interest is to train with different resolutions at different stages.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Beyond the weaknesses I listed, the authors were good at addressing several limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and detailed review. We would like to respond as below to address your remaining concerns.
> [W1] Novelty Concern. The core idea of this paper, i.e. latent diffusion, has been demonstrated to be successful in many generation tasks. Thus it is not surprising that it works on scene text editing. Most of the techniques used in the paper have been proposed perviously.
Diffusion models perform well on other generation tasks, but their performance on scene text is unverified, and both zero-shot use and direct fine-tuning perform poorly (text rendering is their "Achilles' heel"). Achieving good results in text editing is therefore non-trivial and important. DiffSTE uses instruction tuning but only supports English text editing and lacks scalability. Because our multilingual text editing model is conditioned on glyph images, fine-tuning for a new language only requires preparing the corresponding images.
> [W2] The author did not provide any details regarding the position control module, thus the ablation study of this part is not convincing.
We need to clarify that we have already elaborated on the position control in Lines 109-115. It is not a module, but a mask of the area to be edited. Therefore, in the ablation experiment, we compared against results without the mask at the input end to verify the effectiveness of the position control.
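A minimal sketch of such mask-based position control, assuming a hypothetical `(x0, y0, x1, y1)` box format from the OCR detector (the conditioning details in the actual model differ):

```python
import numpy as np

def build_position_control(image, box):
    """Build mask-based position control: a binary mask marking the region
    to edit, plus the image with that region blanked out. `box` is a
    hypothetical (x0, y0, x1, y1) rectangle from an OCR detector."""
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    mask = np.zeros((h, w), dtype=np.float32)
    mask[y0:y1, x0:x1] = 1.0
    if image.ndim == 3:
        masked = image * (1.0 - mask)[..., None]
    else:
        masked = image * (1.0 - mask)
    return mask, masked

img = np.ones((8, 8), dtype=np.float32)
mask, masked = build_position_control(img, (2, 1, 6, 4))
print(mask.sum(), masked[1:4, 2:6].sum())  # 12.0 0.0
```

The mask and the masked image are then fed to the model alongside the glyph condition, so the generator knows exactly where to draw the new text.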
> [W3] The authors did not evaluate a variety of evaluation measures that prior work has done such as SSIM, MSE, PSNR, and many more. These metrics should be computed to a get better idea of the quality and diversity of the output. Please see these this paper for the description of these metrics.
We need to clarify that "diversity" is unnecessary in the scene text editing task: the closer the generated text style is to the original text style, the better. Therefore, we conducted a user study to verify the superiority of our model in generating text styles. Specifically, we randomly selected 100 images from our Web dataset. For each image, we obtained 4 edited results from the 3 baselines and our method. We invited 50 users to identify the edited text style in each group that they felt was most similar to the original image. In total, 20,000 comparison results were collected, and we then used the Bradley-Terry (B-T) model [1] to calculate an overall ranking of all methods. As presented in the following table, our DiffUTE achieves the highest B-T score.
| Method | B-T Score |
| :---: | :---: |
| SRNet | 0.1140 |
| SD2-FT | 0.1545 |
| DiffSTE | 0.3378 |
| DiffUTE | 0.3937 |
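For reference, a B-T ranking like the one above can be fitted from pairwise win counts with the standard minorization-maximization iteration; the win matrix below is hypothetical, not the study's data:

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of comparisons in which method i beat method j.
    Uses the classic MM update p_i <- W_i / sum_j n_ij / (p_i + p_j),
    then normalizes to fix the scale ambiguity. (A sketch of the B-T
    analysis; the paper's exact fitting procedure may differ.)"""
    n = wins.shape[0]
    games = wins + wins.T                 # total comparisons per pair
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()           # total wins of method i
            den = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = num / den
        p /= p.sum()
    return p

# Hypothetical win counts among 3 methods (illustration only).
wins = np.array([[0, 30, 10],
                 [20, 0, 15],
                 [40, 35, 0]], dtype=float)
scores = bradley_terry(wins)
print(np.round(scores, 3))                # method 2 ranks highest
```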
In addition, we provide some other metrics for comparison here as well. Since TextStyleBrush does not provide open-source code, we do not compare with it. We compared MSE, PSNR, and SSIM on the TextOCR validation set. It should be noted that the task of natural image editing is to make the text style of the edited image similar to that of the original image, with an overall harmonious composition. Therefore, we calculated the metrics over the entire edited image. As shown in the table below, our method achieved the best results on all metrics.
| Method | MSE | PSNR | SSIM |FID |
| :---: | :---: | :---: | :---: | :---: |
| SRNet | 0.0352 | 17.62| 0.6232 | 40.88 |
| SD2-FT | 0.0132| 20.84 | 0.7472 | 32.52 |
| DiffSTE | 0.0114 | 21.94 | 0.7895 | 29.84 |
| DiffUTE | **0.0094** | **23.72** | **0.8323** |**28.22**|
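For clarity, whole-image MSE and PSNR can be computed as below (SSIM is more involved; `skimage.metrics.structural_similarity` is a common implementation). The images here are synthetic placeholders, not the evaluation data:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between images with values in [0, 1]."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the target."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * np.log10(max_val ** 2 / e)

rng = np.random.default_rng(3)
clean = rng.uniform(size=(64, 64))    # placeholder "original" image
noisy = np.clip(clean + rng.normal(scale=0.05, size=clean.shape), 0, 1)

print(round(mse(clean, noisy), 4), round(psnr(clean, noisy), 2))
```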
> [W4] There are missing comparisions such as Krishnan et al 2023's TextStyleBrushand Ji’s 2023’s DiffSTE.
We compared our model with DiffSTE [1] on the validation set, as TextStyleBrush has no open-source code available. The available reproduction code (https://github.com/grenlayk/text-deep-fake) differs from the original paper. As shown in the table, our method outperforms DiffSTE on all datasets, possibly because glyph-based control conditions provide more spatial information. Also, DiffSTE only supports English text editing and struggles with the more difficult Chinese text editing (Web dataset).
| Model | Avg.-OCR | Avg.-Cor |Web-OCR | Web-Cor | ArT-OCR | ArT-Cor | TextOCR-OCR | TextOCR-Cor | ICDAR13-OCR | ICDAR13-Cor |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DiffSTE | 74.30 | 75 | 48.55 | 50 | 82.72 | 84 | 84.85 | 85 | 81.48 | 81 |
| DiffUTE | **85.41 (+11.11)** | **85.5 (+10.5)**| **84.83 (+36.28)** | **85 (+35)** | **85.98 (+3.26)** | **87 (+3)** | **87.32 (+2.47)** | **88 (+3)** | **83.49 (+2.01)** | **82 (+1)** |
> [W5] The model rely on a pretrained OCR encoder which just seems like an arbitrary choice. An ablation should be provided with different pretrained encoders to understand the impact of this choice.
Due to high training costs, we conducted a feasibility analysis and chose a multilingual OCR encoder, TrOCR, as our OCR encoder. It supports 100+ languages given image data collection and model training. Unlike DiffSTE, our model only relies on the OCR model when fine-tuning on text in different languages, which makes it convenient to extend.
> [Q1] The author mentioned that "in the first three stages of training, we randomly crop images of sizes S/8, S/4 and S/2 and resize them to S for training", however, which are the three stages and where is the fourth one? Moreover, an ablation that would be of interest is to train with different resolutions at different stages.
This training is specifically for the VAE. As the VAE of SD is designed for natural images, its ability to restore text is weak and requires targeted fine-tuning. Our training is divided into 20 epochs, using 512-size input images and progressively increasing crop sizes (64/128/256/512) to raise the difficulty. Starting directly with very large images can make VAE training difficult (producing noise-like dots). Thanks for your suggestion; we will add ablation experiments later, as training costs are high.
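A minimal sketch of this progressive crop-and-resize curriculum (the nearest-neighbor resize and the staging logic are illustrative assumptions, not the actual data pipeline):

```python
import numpy as np

def progressive_crop(img, stage, S=512, rng=None):
    """Stage-dependent random crop for VAE fine-tuning: crop sizes S/8,
    S/4, S/2 in the first three stages and the full S in the fourth,
    each resized back to S (nearest-neighbor resize as a stand-in for
    the real interpolation)."""
    if rng is None:
        rng = np.random.default_rng()
    crop = [S // 8, S // 4, S // 2, S][stage]
    y = int(rng.integers(0, S - crop + 1))
    x = int(rng.integers(0, S - crop + 1))
    patch = img[y:y + crop, x:x + crop]
    idx = np.arange(S) * crop // S          # nearest-neighbor upsampling
    return patch[np.ix_(idx, idx)]

img = np.arange(512 * 512, dtype=float).reshape(512, 512)
for stage in range(4):
    out = progressive_crop(img, stage, rng=np.random.default_rng(stage))
    print(stage, out.shape)                 # every stage yields (512, 512)
```

Smaller crops magnify the text more aggressively, so early stages present an easier reconstruction task before the model sees full-scale images.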
References
[1] A comparative study for single image blind deblurring. In CVPR, 2016.
[2] Improving Diffusion Models for Scene Text Editing with Dual Encoders. arXiv preprint arXiv:2304.05568. | Summary: In this paper, the authors present DiffUTE, a universal self-supervised text editing diffusion model for language-guided image editing. They address the limitations of existing diffusion models by focusing on rendering accurate text and text style during image generation. DiffUTE incorporates modifications to the network structure, allowing it to handle multilingual character drawing using glyph and position information. Furthermore, a self-supervised learning framework leverages a large amount of web data to enhance the model's representation ability. The experimental results showcase the impressive performance of DiffUTE, demonstrating its ability to achieve high-fidelity and controllable editing on diverse real-world images. Overall, this paper presents a significant advancement in language-guided image editing and offers a promising approach for rendering realistic and customizable text in generated images.
Strengths: - The problem addressed in this paper is a realistic problem that current diffusion models struggle to handle effectively.
- The incorporation of LLM into the inference process is a compelling and intriguing approach.
Weaknesses: - The paper claims significantly better results than other baselines in Table 1. However, it would be helpful to clarify if there are other baselines that have not been adequately considered.
- A simple baseline is missing. Have the authors considered directly replacing the "source text" with the "target text" and calculating the FID (Fréchet Inception Distance)?
- In Table 1, the results for SD1-FT and SD2-FT appear to be poor. It would be valuable to explain the main differences between your method and these baselines.
- There is limited mention of other methods that fine-tune the encoder-decoder. How important is this step? Additionally, could you provide details on the difference in parameter numbers shown in Table 1?
- The discussion regarding self-guidance is absent, despite the proposal of a self-supervised approach for achieving text editing in diffusion. The related papers are:
- Self-Guided Diffusion Models
- Why Are Conditional Generative Models Better Than Unconditional Ones?
- Visual Chain-of-Thought Diffusion Models
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As Above
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and detailed review. We would like to respond as below to address your remaining concerns.
> [W1] The paper claims significantly better results than other baselines in Table 1. However, it would be helpful to clarify if there are other baselines that have not been adequately considered.
Thank you for your professional comments. Following the suggestions of other reviewers, we have added a comparison with the latest DiffSTE [1]. As shown in the table below, our method performs better than DiffSTE on all datasets, which may be because DiffSTE controls image editing through instructions, whereas glyph-based control conditions clearly provide more spatial information. In addition, DiffSTE only supports English text editing and does not perform well on the more difficult Chinese text editing.
| Model | Avg.-OCR | Avg.-Cor |Web-OCR | Web-Cor | ArT-OCR | ArT-Cor | TextOCR-OCR | TextOCR-Cor | ICDAR13-OCR | ICDAR13-Cor |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DiffSTE | 74.30 | 75 | 48.55 | 50 | 82.72 | 84 | 84.85 | 85 | 81.48 | 81 |
| DiffUTE | **85.41 (+11.11)** | **85.5 (+10.5)**| **84.83 (+36.28)** | **85 (+35)** | **85.98 (+3.26)** | **87 (+3)** | **87.32 (+2.47)** | **88 (+3)** | **83.49 (+2.01)** | **82 (+1)** |
> [W2] A simple baseline is missing. Have the authors considered directly replacing the "source text" with the "target text" and calculating the FID?
Thank you for your suggestion. We added a simple method as a baseline, shown in the table below, and compared the different methods in terms of FID. Specifically, our baseline consists of two steps: first, we use a traditional inpainting algorithm [2] to restore the region to be edited, and then we directly write the desired text in that region.
| Method | MSE | PSNR | SSIM |FID |
| :---: | :---: | :---: | :---: | :---: |
| Baseline | 0.0894 | 11.04 | 0.3523 | 90.82 |
| SRNet | 0.0352 | 17.62| 0.6232 | 40.88 |
| SD2-FT | 0.0132| 20.84 | 0.7472 | 32.52 |
| DiffSTE | 0.0114 | 21.94 | 0.7895 | 29.84 |
| DiffUTE | **0.0094** | **23.72** | **0.8323** |**28.22**|
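A rough sketch of the baseline's first step; it uses a crude neighbor-averaging fill as a stand-in for the cited Navier-Stokes inpainting [2] (OpenCV's `cv2.inpaint` with `cv2.INPAINT_NS` implements the real method, and the second step would draw the target text into the filled region, e.g. with `cv2.putText`):

```python
import numpy as np

def diffuse_fill(img, mask, iters=500):
    """Crude stand-in for classical inpainting: iteratively replace masked
    pixels with the average of their 4-neighbors so that values diffuse in
    from the surrounding region. Only the masked pixels are updated."""
    out = img.copy()
    m = mask.astype(bool)
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[m] = avg[m]
    return out

img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # smooth gradient image
mask = np.zeros((16, 16))
mask[6:10, 6:10] = 1                               # the "text" region
img_holed = img * (1 - mask)                       # blank that region
filled = diffuse_fill(img_holed, mask)
err = np.abs(filled - img)[mask.astype(bool)].mean()
print(err < 0.05)                                  # fill recovers the gradient
```

Because the fill ignores text style entirely, this baseline's FID is far worse than the learned methods in the table above.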
> [W3] In Table 1, the results for SD1-FT and SD2-FT appear to be poor. It would be valuable to explain the main differences between your method and these baselines.
The reason SD1-FT and SD2-FT perform poorly is that they perform editing through instructions. It is difficult for an instruction to convey the shape of the text, and errors also arise in identifying the specific text to be edited from the instruction. Our method uses glyph images to provide information about the shape of the text, and adds position control to strengthen the generation target, which provides richer control information than SD and generates high-quality text. In addition, since we use glyph images as the condition, when facing an editing task in a new language, only images of text in the new language need to be provided to fine-tune our model. In contrast, SD needs corresponding instructions prepared for fine-tuning.
> [W4] There is limited mention of other methods that fine-tune the encoder-decoder. How important is this step? Additionally, could you provide details on the difference in parameter numbers shown in Table 1?
In our model, there are two fine-tuning processes. For fine-tuning the VAE, there are two options: training directly at the model's input image size, or training with the progressive training strategy. We compare these two methods in Figure 5 and Figure 6. For fine-tuning the entire SD, there are no alternative fine-tuning methods. We list the detailed parameters of the models in Table 1 below. Unfortunately, many models did not provide parameter information, which may be because the editing quality in this field is currently not satisfactory enough; once the quality becomes good enough, lightweight design will be considered.
| Method | Params |
| :---: | :---: |
| Pix2Pix | Unknown|
| SRNet | Unknown |
| MOSTEL | Unknown |
| SD| 1070 MB |
| DiffSTE | Unknown |
| DiffUTE | 1500 MB |
> [W5] The discussion regarding self-guidance is absent, despite the proposal of a self-supervised approach for achieving text editing in diffusion. The related papers are: Self-Guided Diffusion Models; Why Are Conditional Generative Models Better Than Unconditional Ones?;Visual Chain-of-Thought Diffusion Models.
Thank you for your professional comments. We will add the following discussion about other self-supervised diffusion models in the revised paper.
Hu et al. proposed self-guided diffusion models, which use the flexibility of self-supervised signals to design a framework that eliminates the need for annotations. By leveraging a feature extraction function and a self-annotation function, they provide guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Bao et al. train a conditional diffusion model by taking cluster indices as conditions. And Harvey et al. propose to close the gap between conditional and unconditional models using a two-stage sampling procedure. The above methods mostly train by obtaining pseudo-labels of the image, while DiffUTE is essentially a task similar to MAE: we hope that in this process, the model can learn the mapping relationship between the glyph, the surrounding background, and the image representation.
References
[1] Improving Diffusion Models for Scene Text Editing with Dual Encoders. arXiv preprint arXiv:2304.05568.
[2] Navier-stokes, fluid dynamics, and image and video inpainting. In CVPR, 2001.
[3] Self-Guided Diffusion Models. arXiv preprint arXiv:2210.06462.
[4] Why Are Conditional Generative Models Better Than Unconditional Ones? arXiv preprint arXiv:2212.00362.
[5] Visual Chain-of-Thought Diffusion Models. arXiv preprint arXiv:2303.16187. | Summary: The authors propose a method of fine-tuning Stable Diffusion to modify words in images, while maintaining the original font style and the background region.
Specifically, they first fine-tune the VAE with text images from several datasets.
Then, utilizing an off-the-shelf OCR detector, they randomly mask out one text box, and tune the denoising network asking it to fill the region with original text that is given as the condition in cross-attention layers.
With this simple and intuitive method, they achieve improved performance in various evaluations.
Strengths: - the authors tackle a meaningful task
- the proposed method is simple and easy to reproduce
- the proposed method demonstrates improved performance on various evaluation metrics
Weaknesses: - would only work for texts that can be detected by off-the-shelf OCR detectors
- there are missing details on some parts of the method
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - not sure whether this method is self-supervised or not, since it requires the utilization of an OCR detector in training
- could you elaborate more on a glyph image? for example, what is the output format of the glyph encoder?
- could you compute the accuracy of ChatGLM’s predictions? and how much cost does it take to fine-tune it?
- it seems that the figure 3 is redundant. maybe you can incorporate it into the figure 2?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: please refer to the Weaknesses section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and detailed review. We are encouraged that the reviewer finds that our DiffUTE 'is simple and easy to reproduce' and 'demonstrates improved performance on various evaluation metrics'. We respond below to address your remaining concerns.
> [W1] would only work for texts that can be detected by off-the-shelf OCR detectors
As DiffUTE relies on the text boxes extracted by OCR, the generated results degrade when the boxes are inaccurate. We provide an analysis of some failure cases in the global response PDF (Figures S1-S3). However, when the OCR box covers the text well, DiffUTE can usually perform the editing well. Our next step is to design a dynamic mask to compensate for inaccurate mask regions. We also tried an OCR-free method, fine-tuning SD1 and SD2 by instruction tuning. However, the experimental results (Table 1 and Figure 4) show that controlling text editing through instructions cannot achieve good results. This may be because natural language provides only the semantics of the text to be modified, while glyphs provide more detailed spatial structure information.
> [W2] there are missing details on some parts of the method.
Considering the limited space, we will provide detailed explanations of any unclear parts in the appendix. In addition, we have already provided a detailed description of the experimental details, as well as our training and inference code, in the appendix.
> [Q1] Not sure whether this method is self-supervised or not, since it requires the utilization of an OCR detector in training.
Nowadays, OCR technology is quite mature and can be used for preprocessing in many downstream tasks. In fact, we only provide the position information of the text to the model, without any additional artificially defined information or labels. At the same time, OCR information is a basic requirement for scene text editing. Therefore, compared with artificially constructing an instruction-tuning dataset, our training can be regarded as self-supervised.
> [Q2] Could you elaborate more on a glyph image? for example, what is the output format of the glyph encoder?
The glyph image is created by writing the text to be edited with a universal font on a white background image. We have submitted the complete code in the supplementary material. For easy understanding, here is a snippet of the code used to generate the glyph image. We use TrOCR as the glyph encoder. Specifically, TrOCR consists of an encoder and a decoder: the encoder encodes the text in the image, and the decoder interprets the encoding to obtain the recognition result. In our model, we use only the encoder of TrOCR to extract features, so the output of the glyph encoder is a deep feature vector.
```python
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def draw_text(im_shape, text):
    # text size
    font_size = 40
    # text font, you can download it from the internet
    font_file = 'arialuni.ttf'
    # create a pure white background sized to the text length
    len_text = len(text)
    img = Image.new('RGB', ((len_text + 2) * font_size, 60), color='white')
    # define the font object
    font = ImageFont.truetype(font_file, font_size)
    # define the text position
    pos = (40, 10)
    # write the text on the background
    draw = ImageDraw.Draw(img)
    draw.text(pos, text, font=font, fill='black')
    img = np.array(img)
    return img
```
> [Q3] Could you compute the accuracy of ChatGLM’s predictions? and how much cost does it take to fine-tune it?
When ChatGLM is not fine-tuned, it struggles to return correct results and understand human commands due to its lack of exposure to structured data such as OCR results. We tested it on 100 commands: the accuracy before fine-tuning was 5%, while after fine-tuning it reached 98%. We used an A100 for fine-tuning.
> [Q4] it seems that Figure 3 is redundant. maybe you can incorporate it into Figure 2?
Thank you for your professional comments. We will update our images in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. It did resolve some of my concerns, and I would like to keep my original score. | Summary: The paper proposes DiffUTE for general text editing.
DiffUTE utilizes Stable Diffusion model with several specific model designs, progressive training strategy, positional and glyph guidance, and a self-supervised training framework.
Equipped with these designs, DiffUTE achieves remarkable results compared to other baselines on several public datasets.
Moreover, the authors also provide a chat-based interface which enables an easier manipulation for the users.
Strengths: Originality, motivation and significance:
- The paper sheds an interesting perspective on editing text using a pre-trained Stable Diffusion model. Two motivations raised in Line 29 and Line 32 are intuitive.
- The interaction module is interesting and easy to use.
Technical approach:
- Finetuning VAE with a progressive training strategy (PTT) with different image sizes in different stages is a good choice to overcome blurry outputs. As shown in Table 2, with PTT, DiffUTE has a noticeable gain.
- The insight into generating fine-grained texts makes sense. With positional and glyph guidance, DiffUTE generates texts with natural shapes.
- The proposed self-supervised training strategy is straightforward and useful. It also reduces the need for human annotations.
Clarity: the paper offers a smooth writing and is easy to follow.
Weaknesses: - The motivation of using diffusion models v.s. GANs is not clearly stated. Why would the authors prefer to use diffusion model (e.g., Stable Diffusion)?
- The paper lacks some failure case analysis. For example, DiffUTE relies on pretrained OCR detector. What if the OCR detector compromises in some cases?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Writing:
- Line 129: it is better explicitly to explain what $x_m$ is at the first place.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors omit some ethical discussions in the paper. For example, the authors should discuss the misuse of the technique for misinformation spread.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and detailed review. We are encouraged that the reviewer finds that our DiffUTE 'sheds an interesting perspective on editing text using a pre-trained Stable Diffusion model' and that the 'interaction module is interesting and easy to use'. We respond below to address your remaining concerns.
> [W1] The motivation of using diffusion models v.s. GANs is not clearly stated. Why would the authors prefer to use diffusion model (e.g., Stable Diffusion)?
Previously, most text editing work used GAN or simple CNN as network structures, which focused mostly on low-resolution, simple background English images and performed poorly in editing texts while maintaining text style and background consistency. Recently, diffusion models have achieved remarkable results, performing well in style preservation and detail texture. We conducted a feasibility analysis of zero-shot text editing based on GAN models and diffusion models and found that the diffusion model has great potential and can almost restore the shape of simple text in some editing tasks. Therefore, we chose the diffusion model because of its powerful generation ability as well as its controllability and scalability.
> [W2] The paper lacks some failure case analysis. For example, DiffUTE relies on pretrained OCR detector. What if the OCR detector compromises in some cases?
Thank you for your professional comments. Here we provide an analysis of some failed examples. Since DiffUTE relies on OCR-extracted boxes for text editing, there is no way to edit the text when the box is inaccurate, as shown in global response PDF (Figure S1-S3). However, when the OCR box can cover the text well, DiffUTE can usually perform editing work well. DiffUTE itself may also fail to generate text accurately in some scenarios. For example, when there are too many Chinese characters to be edited (more than 6), it is difficult to generate them accurately due to the complexity of Chinese character generation. Some studies that focus on font style generation only compare the performance of generating individual characters. In the future, we will consider how to edit long text sequences.
> [Q1] Line 129: it is better explicitly to explain what x_m is at the first place.
We would like to clarify that we have stated in line 83: "by the concatenation of latent image vector $z_t$, masked image latent vector $x_m$, and text mask $m$.", and also visualized the corresponding image in Figure 3.
> [L1] The authors omit some ethical discussions in the paper. For example, the authors should discuss the misuse of the technique for misinformation spread.
First, our model aims to improve users' efficiency and accuracy in image processing, especially when editing large amounts of text quickly and with high quality. Our goal is to provide users with better tools, not to spread false information. Secondly, we recognize that any technology has the risk of being misused, including our model. We will discuss this issue in our paper and propose some suggestions to minimize this risk. For example, we may recommend that regulatory agencies and social media platforms take measures to identify and combat false information, as well as provide necessary education and training to the public. At the same time, using the data we generate, we can also help improve the detection performance of false information detection models. In fact, we are also committed to developing an AI-generated content detection model for regulating generated content. Finally, we hope that users and society can be aware of the potential risks of this technology and exercise responsibility and caution when using it. We hope that our technology can bring more benefits to society rather than harm.
---
Rebuttal Comment 1.1:
Comment: I appreciate your answers to my questions. After reading your rebuttal, I prefer to keep my original rating. Please try to include your failure case discussion and ethical discussions in the revised paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for the positive reviews and constructive feedback. We thank the AC, SAC and PC for facilitating the review process.
It is very encouraging to hear from the reviewers that:
- Performance of DiffUTE: “exhibits impressive editing performance; has strong ability to accurately infer text styles and generate corresponding images; demonstrates improved performance on various evaluation metrics; ” [8YaX, jbcg]
- Usability of DiffUTE: "Leveraging LLM, the model offers broad applicability across many possible application scenarios; the interaction module is interesting and easy to use; the incorporation of LLM into the inference process is a compelling and intriguing approach; " [8YaX, sxrD, Rgqn]
- Motivation of this work: "tackle a meaningful task; addressed a realistic problem that current diffusion models struggle to handle effectively” [jbcg, Rgqn]
- Paper writing: "writing is clear and the motivations seem sound; paper is organized clearly and is easy to read" [FkEt, 8YaX]
We provide clarifications to each of the queries from the reviewers as response to each of the reviews. We sincerely hope that our DiffUTE would be positively received, considering its value addition to the community. We provide in the PDF attachment examples that the reviewers requested to be supplemented.
Pdf: /pdf/6b261d9308f2894cf44b2b68a0896382bfc02f3b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces DiffUTE, an innovative diffusion-based text editing framework designed to seamlessly fill in missing words in an image with user-specified text. By employing a self-supervised training framework, the model effectively learns from an extensive collection of synthetic data pairs, enabling it to infer accurate text styles and generate images that seamlessly incorporate the desired text. Experimental results showcase remarkable qualitative text editing performance, demonstrating the model's precision in both text and style accuracy. Additionally, quantitative analysis shows that the proposed method surpasses the performance of baseline approaches.
Strengths: + The proposed method exhibits impressive editing performance, as demonstrated through extensive experiments. It displays a strong ability to accurately infer text styles and generate corresponding images.
+ Leveraging LLM, the model offers broad applicability across many possible application scenarios.
+ The paper is organized clearly and is easy to read.
Weaknesses: - Quantitative metrics for style: Although the paper effectively showcases the model's ability to generate text that is stylistically consistent with the rest of the images, it does not provide a quantitative analysis or specific metrics to support this claim.
- Alternative diffusion-based baselines: While ControlNet is a powerful diffusion-based editing framework, it is not specifically designed for text editing tasks. Another recent diffusion-based editing approach, DiffSTE[1], shares similarities with this work as it focuses on specialized text editing and exhibits commendable performance. How does the proposed method compare to DiffSTE in terms of performance?
[1] Improving Diffusion Models for Scene Text Editing with Dual Encoders.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Are there any metrics available for assessing the accuracy/consistency of text style? Or could this potentially be validated through a human study, similar to the Cor metric presented in the paper?
2. Performance comparison with other diffusion-based text-editing method, DiffSTE?
3. An intriguing aspect of this work is its remarkable ability to accurately infer text styles, even when multiple possible texts are present within an image. For instance, in Figure 4, column 1, we can observe that the imprinted time exhibits the correct style (black), despite the presence of additional red texts. It raises the question of how robust this achievement is. Specifically, does the model consistently succeed when confronted with multiple text categories? Furthermore, can this success be attributed to different types of characters, such as numerical values versus Chinese characters, as depicted in this example?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This paper clearly discusses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and detailed review. We are encouraged that the reviewer finds that our DiffUTE 'exhibits impressive editing performance, as demonstrated through extensive experiments' and that, 'leveraging LLM, the model offers broad applicability across many possible application scenarios'. We respond below to address your remaining concerns.
> [W1&Q1] Quantitative metrics for style: Although the paper effectively showcases the model's ability to generate text that is stylistically consistent with the rest of the images, it does not provide a quantitative analysis or specific metrics to support this claim. & Are there any metrics available for assessing the accuracy/consistency of text style? Or potentially this can be validate through a human study similar to the Cor metrics present in paper.
It is difficult to quantify the style effect directly, but we can provide the results of a user study. Specifically, we randomly selected 100 images from our Web dataset. For each image, we obtained 4 edited results: 3 from baselines and 1 from our method. We invited 50 users to identify the edited text style in each group that they felt was most similar to the original image.
Finally, 20,000 comparison results were collected, and the Bradley-Terry (B-T) model [1] was then used to calculate an overall ranking of all methods. As presented in the following table, our DiffUTE achieves the highest B-T score.
| Method | B-T Score |
| :---: | :---: |
| SRNet | 0.1140 |
| SD2-FT | 0.1545 |
| DiffSTE | 0.3378 |
| DiffUTE | 0.3937 |
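For readers unfamiliar with the B-T model, here is a minimal sketch of how such scores can be fit from pairwise preference counts with the standard MM (Zermelo) iteration. The win counts below are toy numbers for illustration, not the study's actual comparison data:

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """MM (Zermelo) iteration: wins[i, j] = number of times method i
    was preferred over method j. Returns normalized B-T scores."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = num / den
        p /= p.sum()  # normalize so scores sum to one
    return p

# Toy pairwise preference counts for 4 methods (hypothetical data):
wins = np.array([[0, 10, 5, 3],
                 [20, 0, 12, 8],
                 [25, 18, 0, 14],
                 [27, 22, 16, 0]])
scores = bradley_terry(wins)
```

The method that wins the most pairwise comparisons (row 3 here) ends up with the largest normalized score, which is exactly the kind of overall ranking reported above.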
> [W2&Q2] Alternative diffusion-based baselines: While ControlNet is a powerful diffusion-based editing framework, it is not specifically designed for text editing tasks. Another recent diffusion-based editing approach, DiffSTE[1], shares similarities with this work as it focuses on specialized text editing and exhibits commendable performance. & How does the proposed method compare to DiffSTE in terms of performance? Performance comparison with other diffusion-based text-editing method, DiffSTE?
Thank you for your professional comments. We have made a detailed comparison with DiffSTE on the validation set. As shown in the table below, our method performs better than DiffSTE on all datasets, which may be due to DiffSTE's use of instructions to control image editing; glyph-based control conditions can clearly provide more spatial information. In addition, DiffSTE only supports English text editing and does not perform well on the more difficult Chinese text editing. (The Web dataset contains text in various languages, mainly English and Chinese.)
| Model | Avg.-OCR | Avg.-Cor |Web-OCR | Web-Cor | ArT-OCR | ArT-Cor | TextOCR-OCR | TextOCR-Cor | ICDAR13-OCR | ICDAR13-Cor |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DiffSTE | 74.30 | 75 | 48.55 | 50 | 82.72 | 84 | 84.85 | 85 | 81.48 | 81 |
| DiffUTE | **85.41 (+11.11)** | **85.5 (+10.5)**| **84.83 (+36.28)** | **85 (+35)** | **85.98 (+3.26)** | **87 (+3)** | **87.32 (+2.47)** | **88 (+3)** | **83.49 (+2.01)** | **82 (+1)** |
> [Q3] An intriguing aspect of this work is its remarkable ability to accurately infer text styles, even when multiple possible texts are present within an image. For instance, in Figure 4, column 1, we can observe that the imprinted time exhibits the correct style (black), despite the presence of additional red texts. It raises the question of how robust this achievement is. Specifically, does the model consistently succeed when confronted with multiple text categories? Furthermore, can this success be attributed to different types of characters, such as numerical values versus Chinese characters, as depicted in this example?
This is indeed an interesting question. To further understand the generation ability of DiffUTE, we have provided more examples in the global response PDF. As shown in Figure S4, when the image is filled with a single Chinese character, DiffUTE can also infer the text style based on the surrounding text. Furthermore, as shown in Figure S6, the target we input for modification is "13", but DiffUTE generated a result with the addition of "元" based on its understanding of the surrounding information. This demonstrates that DiffUTE has a certain degree of document understanding ability and can infer the required text style from contextual information in document data. This reasoning ability can also be observed in Figure S4: DiffUTE infers the angle of the text to be filled from the posture of the surrounding text, so that the inclination angle of the generated text is consistent with the relevant fonts around it. We believe that the reasoning ability of DiffUTE comes from training on a large amount of data, from which it learns some structured information. Of course, it is not always able to infer accurately, and there may be situations where the style does not match the expectation; no model is perfect, after all.
References
[1] A comparative study for single image blind deblurring. In CVPR, 2016.
---
Rebuttal 2:
Title: Thank you for your replies
Comment: Thank you for the detailed replies. The additional information clarifies my previous concerns:
[A1 for previous W1,Q1]: The subjective study quantitatively shows that DiffUTE can synthesize characters with good style consistency. I agree that it's difficult to quantify the text editing performance using existing metrics. Therefore, I second reviewer FkEt's suggestion that it would be great to also include the commonly used metrics, so that readers can understand the performance from the different perspectives revealed by different metrics. I am glad to see that DiffUTE also performs well on these metrics, based on your reply to reviewer FkEt.
[A2 for previous W2,Q2]: The experiment shows DiffUTE outperforms the state-of-the-art diffusion-based text editing framework.
[A3 for previous Q3]: The attached examples hint that DiffUTE has the ability to infer style information (angle, font) from the context, even in a more challenging scenario when numbers and text need to be predicted simultaneously.
I appreciate updates made to the paper. From my perspective, while the latent diffusion has been shown effective in many generation tasks, generating scene text is still kinds of difficult, especially in the publicly available diffusion models (e.g. stable diffusion). DiffSTE indeed shows good text editing performance. Therefore I would like to maintain my current score. | null | null | null | null | null | null |
Incentives in Private Collaborative Machine Learning | Accept (poster) | Summary: This paper studies the problem of learning a machine learning model collaboratively under differential privacy and the incentives for parties to participate in the effort. More specifically, in the paper's setting, multiple parties share private sufficient statistics with a central aggregator. DP adds noise to the sufficient statistics, and parties with less strict privacy guarantees contribute more signal to the inference than the ones with strong privacy guarantees. Therefore, the authors argue that there should be an incentive structure for participating in the scheme which rewards the parties with less strict privacy guarantees more. Besides accuracy, the authors consider fairness as another desired property of the collaboration. The authors propose a method that learns a posterior distribution for the parameter of interest using the shared private sufficient statistics and releases tempered posterior samples to each client. The level of tempering determines how close the samples are to the true posterior, and parties with stricter privacy guarantees get samples from a more flattened (less tempered?) posterior. The level of tempering is selected by the central aggregator based on the Bayesian surprise, which measures how much more information a party's data brings in (measured in terms of the KL divergence against the prior). The authors demonstrate empirically that a party's reward from collaboration is a better model than simply using their own posterior learned from local data under DP. Furthermore, the results show that the reward mechanism is able to give larger rewards for looser privacy guarantees. Finally, the authors demonstrate that the proposed reward control mechanism outperforms an existing one (i.e., gets larger benefits from the collaboration), especially when the reward is low.
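The tempering mechanism described in this summary can be sketched with a toy conjugate-Gaussian model (all numbers and the function names below are illustrative, not from the paper): raising the likelihood to a power kappa in [0, 1] flattens the posterior, so the "Bayesian surprise" KL to the prior shrinks monotonically as kappa goes to 0.

```python
import math

def kl_gauss(mu1, var1, mu0, var0):
    # KL( N(mu1, var1) || N(mu0, var0) ) in closed form
    return 0.5 * (math.log(var0 / var1) + (var1 + (mu1 - mu0) ** 2) / var0 - 1.0)

def tempered_posterior(kappa, n, xbar, var_lik, var_prior):
    # Conjugate-Gaussian posterior with the likelihood raised to power kappa:
    # q_kappa(theta) proportional to N(theta; 0, var_prior) * likelihood^kappa,
    # for n observations with sample mean xbar and noise variance var_lik.
    prec = 1.0 / var_prior + kappa * n / var_lik
    var = 1.0 / prec
    mu = var * (kappa * n * xbar / var_lik)
    return mu, var

# The surprise KL(q_kappa || prior) grows monotonically with kappa:
for kappa in (0.0, 0.5, 1.0):
    mu, var = tempered_posterior(kappa, n=50, xbar=1.2, var_lik=1.0, var_prior=4.0)
    print(kappa, kl_gauss(mu, var, 0.0, 4.0))
```

At kappa = 0 the tempered posterior equals the prior and the surprise is exactly zero; larger kappa yields a sharper, more informative posterior and a larger surprise, matching the intuition that less tempering corresponds to a more valuable reward.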
Strengths: I believe the setting where multiple data holders would like to collaboratively learn from sensitive data is very realistic, and proposing a reasonable incentive structure for participation is a valuable goal. The proposed structure, that promises more reward for less perturbed data, sounds reasonable to me. The theoretical properties (especially V2 and V3) of KL as a valuation function are novel as far as I know, and give a solid theoretical foundation for using it. Using tempering to control the reward is also novel and sounds like a reasonable choice.
The empirical evaluation over multiple models and data sets gives support for the incentive structure, showing that a party can gain from the collaboration and as the $\epsilon$ grows they are rewarded more.
Weaknesses: The method relies on a rather strong trust model in which the parties contribute honestly to the collaboration. As the surprise, which determines the valuation for a party, is computed as the KL divergence between the party's posterior and the prior, it would be quite easy for a party to send bogus data to the mediator that would yield a large KLD (especially if the prior is known). The authors do address this limitation in the Remark in line 172 (and extensively in Appendix I), but I would encourage the authors to extend this discussion in the main paper as well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - You say that you can optimize the $\kappa_i$ to control $q_i(\theta)$ using any root-finding algorithm. However, to solve the $q_i(\theta)$ for a given $r_i^*$ you would need to optimize the KL-divergence between the tempered posterior and the prior right? As you say in Section 3, this KLD is intractable, and hence you need to deploy MCMC methods to evaluate it. How computationally expensive is this? Would you need to run the entire MCMC chain for each $\kappa_i$ value from scratch?
- Figures 2a and 2c: is the difference between $v_2$ and $v_N$ on the largest $\epsilon$ due to some numerical issue in valuating the party? Or are they supposed to overlap if $r_2 = v_N$?
- Minor comment: On line 238, you say that "... party k will now enjoy $(1/\tau)$-DP guarantee ...". I guess you don't really mean this as a pure DP guarantee, but as an RDP guarantee? If so, it would be good to add $\lambda$ to the privacy notation (and call it RDP instead of DP).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the method are discussed in the main paper and more thoroughly in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed summary of our paper and comments. We will address some of the weaknesses and questions below and include them in the revision of our paper.
> Weaknesses.
Thank you for referring to Appendix I for our discussion on the truthfulness assumption. We will find the space to transfer part of the discussion to the main paper.
> Q1 on optimising $\kappa_i$
Yes, we run the entire MC sampling for each $\kappa_i$ value from scratch. The computational complexity is given in Appendix F. Importantly, the number of runs is only a constant factor of Step 3.
We think that our mechanism is computationally practical and can be scaled to more parties.
When there are more parties, the mediator can sample coalitions according to [25, M, Z] to approximate the Shapley value (in step 3 of App F) within a desired absolute error.
In the reward phase (step 5 of App F), the $(\kappa_i, r_i)$ pairs evaluated for root-finding of party $i$ can be reused to identify a narrower range for root finding for the other parties, thus reducing the number of evaluations.
[M] Maleki, S., Tran-Thanh, L., Hines, G., Rahwan, T., & Rogers, A. (2013). Bounding the estimation error of sampling-based Shapley value approximation. arXiv:1306.4265, 2013.
[Z] Zijian Zhou, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low.Probably Approximate Shapley Fairness with Applications in Machine Learning. AAAI, 2023.
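To make the root-finding step concrete: the per-party search is one-dimensional and the reward value grows monotonically with $\kappa_i$, so plain bisection suffices. The sketch below is hypothetical; it uses a toy closed-form surrogate in place of the MCMC-estimated KL value, and `solve_kappa` is our own illustrative name, not a function from the paper:

```python
import math

def solve_kappa(reward_fn, target, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection for the kappa with reward_fn(kappa) == target,
    assuming reward_fn is monotonically increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reward_fn(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in for the MC-estimated KL(q_kappa || prior): any increasing
# function of kappa works for illustration (purely hypothetical).
surrogate = lambda k: math.log1p(9.0 * k)  # 0 at k = 0, increasing

kappa = solve_kappa(surrogate, target=1.0)
```

Each evaluation of `reward_fn` corresponds to one MC sampling run, which is why reusing previously evaluated $(\kappa_i, r_i)$ pairs to narrow the bracket, as described above, reduces the total number of runs.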
> Figures 2a and 2c:
In Fig. 2a and 2c, for the largest $\epsilon$, we have (1) $r_2 = v_N$ and (2) party 2's model reward value $r_2$ is slightly larger than the value of its perturbed SS $v_2$.
Recall that the largest $\epsilon$ corresponds to weaker privacy and thus more information in $o_2$.
(1) As party 2 has the highest Shapley value, by fairness P3, party 2 should get a higher reward value than others. By P2, party 2 should get a model reward with value $r_2 = v_N$.
(2) Party 2's model reward value $r_2$ is larger than the value of its perturbed SS $v_2$ because the model reward is additionally trained on the data from parties 1 and 3. However, as parties 1 and 3 selected the moderate privacy guarantee $\epsilon_1 = \epsilon_3 = .2$, the increase is small.
> Minor comment
Yes, it should be the RDP guarantee. We will make the correction in the paper!
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my questions! I'm happy to keep my score as is. | Summary:
This research paper investigates the intersection of data sharing incentives and privacy concerns within the realm of collaborative machine learning (ML). Collaborative ML aims to improve model quality by leveraging diversified data from multiple parties, yet the potential benefits of this practice are often hampered by concerns about privacy and the costs of sharing data.
Several existing studies have acknowledged the need for incentives that encourage collaboration, such as guaranteed fair rewards for valuable data contributions. Yet, these incentive-based rewards expose the parties to privacy risks. Meanwhile, some solutions enforce differential privacy (DP) to mitigate privacy concerns but may inadvertently compromise the perceived fairness of collaboration and group welfare. This research fills the gap by proposing an incentive-aware, privacy-preserving reward scheme.
The authors address several questions in their investigation. They consider the impact on valuation and reward if a party demands stronger DP guarantee, suggesting that such a party's reward should generally decrease to avoid randomness due to DP noise. They propose a privacy-valuation trade-off and explore ways to value a party's data, focusing on the quality of inference of model parameters under DP. Finally, they detail a reward scheme designed to maintain privacy, individual rationality, and fairness.
The paper's significant contributions include the development of a new privacy-valuation trade-off criterion, the proposal of novel incentives, and the introduction of reward control mechanisms to adjust the distribution of posterior samples of model parameters among different parties. These solutions aim to preserve similarity to the grand coalition's model while deterring excessive DP.
Strengths: A key strength of this paper is its addressing of a highly significant, real-world issue. Collaboration between diverse parties is a critical aspect of machine learning. However, progress in this field is often hindered by concerns surrounding privacy. Therefore, the paper's proposition of a framework that integrates privacy within the incentive scheme of a collaborative process is not just beneficial, but essential. It effectively merges theoretical constructs with pragmatic applications, fostering advancements in secure, collaborative machine learning practices. Moreover, the paper has good organization and clarity.
Weaknesses: Although the paper adopts numerous techniques from previous work in collaborative machine learning and might therefore appear to lack novelty, it applies them within the context of differential privacy, which offers a unique perspective: designing incentives that take privacy considerations into account necessitates an innovative approach.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: no questions
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging feedback! We appreciate your detailed and accurate summary of the contribution of our work and the strengths. We also appreciate that the reviewer has recognized the novelty of our work as the process of designing incentives that take into account privacy considerations. This necessitates an innovative approach combining numerous techniques, which is the main thrust of this paper. | Summary: This paper proposes a mechanism to do single-round private collaborative model learning by several agents, each with access to a dataset. The agents do not want to share their data and instead exchange perturbed sufficient statistics of their data, which the central server must aggregate and learn from. Since the devices benefit from collaboration, they might be incentivized to add excessive noise to their sufficient statistics to get the maximum privacy possible while still getting some benefit from collaboration. While this might benefit specific agents, it might not be the ideal scenario from the perspective of the majority of agents. This paper's mechanism disincentivizes this behavior by providing different models to each client, based on their differential privacy requirements, with better models to agents with "most useful" sufficient statistics. The rewards provided by the server also satisfy other desirable properties such as fairness, individual rationality, group welfare, etc.
Overall I like the paper. It is a good step towards reconciling federated learning with agents' strategic behavior. Apart from my complaints about the presentation of the overall scheme and other minor writing issues, I do not have major concerns about the paper and recommend accepting it. I would recommend the authors incorporate the suggestions below to improve the exposition. I am open to increasing my score.
Strengths: The paper addresses a significant problem: agents' strategic and self-serving behavior in federated learning. Federated learning hopes to enable large-scale privacy-preserving collaborations and encourage users to share the benefits of their data to develop an overall better model. Most incentive designs work with explicit monetary rewards, but without an actual server, that is not doable. This paper bypasses the issue and designs model rewards that satisfy several desirable properties. The paper is also quite exhaustive in its treatment of which desirable properties must be satisfied at each step.
Weaknesses: 1. The paper is written in a bottom-up approach, i.e., from sufficient statistics to the final model. It is OK to write it this way, but the overall scheme of things remains unclear after reading the paper. I know Figure 1 is an attempt to summarize everything, but actual pseudocode would be helpful to show explicitly which quantities are being computed, how sampling occurs, etc. It would also be helpful to indicate in the pseudocode which sampling steps are approximated using MCMC procedures. For instance, as currently presented, it is unclear until Section 4 why the paper talks about coalitions; an informed reader might guess it is due to Shapley value computations, but otherwise it seems a bit arbitrary. Writing the paper top-down or adding the above pseudocode might clarify these aspects.
2. The individual rationality definition in P4 seems incorrect. Why would the agent perturb the sufficient statistic if they are alone? Collaboration with perturbed statistics should be better than unperturbed individual effort.
3. Finally, the complexity of computation/sampling will also become explicit by being more precise about the exact computations. Shapley value computations are generally expensive, and discussing the effect of inexactness in the procedure would be good.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See comments above. Can actual individual rationality be satisfied by the mechanism?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper should emphasize the scope of the work. Most federated learning happens over several rounds of interaction and involves local processing on the agent. There might be some applications where a single round of interaction between the server and the clients is enough, but that is the exception, not the norm. It is unclear if the mechanism provided in this paper can be applied repeatedly in some manner. I understand this extension might be complex, but not discussing it is a mistake. I urge the authors to list potential applications they have in mind in the motivation. As mentioned above, the authors should be more explicit about the exact computations and the complexity of running the mechanism. Finally, the individual rationality requirement in P4 seems wrong. If this cannot be changed, it should be emphasized strongly as a limitation of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your helpful comments and suggestions! We have responded to them below and hope it will improve your opinion of our work.
>Weakness 1
Our paper is written in a bottom-up approach and defers information to when it is needed or the appendix: App.A.3 contains the pseudocode for sampling, App.B (lines 684-94) contains information for readers less familiar with data valuation and collaborative ML, and App.F describes the main steps of our scheme and how various quantities should be computed. We agree that it is useful to give an overview (e.g., to clarify why coalitions are mentioned) in the main paper and will add one to Sec.2.
We have attached an overview of the quantities and steps involved in our private collaborative ML scheme in the global pdf.
>Weakness 2
An agent will perturb the sufficient statistic (SS) when alone to protect the privacy of data owners from curious users of its ML model. For example, a hospital would not want its doctors to infer much about any patient’s data and a firm would not want employee users to infer about customers. This motivation applies to existing DP works like DP-SGD [Abadi, 2016] and DP noise-aware inference [Bernstein & Sheldon, 2018; Kulkarni et. al, 2021]. We will clarify the above in our paper.
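To make this concrete, here is a minimal sketch of a lone party releasing a sufficient statistic under DP via the Gaussian mechanism (the function name, sensitivity bound, and $(\epsilon, \delta)$ calibration below are illustrative assumptions, not from the paper, which works with Renyi-DP):

```python
import numpy as np

def perturb_sufficient_statistic(s, sensitivity, epsilon, delta, rng=None):
    """Release a sufficient statistic under (epsilon, delta)-DP via the
    Gaussian mechanism (illustrative calibration; the paper uses Renyi-DP)."""
    rng = rng or np.random.default_rng(0)
    # Standard Gaussian-mechanism noise scale for a statistic with the
    # given L2 sensitivity
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return s + rng.normal(0.0, sigma, size=np.shape(s))

s = np.array([10.0, -3.0])   # e.g., a sum of features over a private dataset
o = perturb_sufficient_statistic(s, sensitivity=1.0, epsilon=1.0, delta=1e-5)
```

Even in the single-party setting, the model is then trained on the perturbed release `o` rather than `s`, which is why a stronger DP guarantee (smaller `epsilon`) degrades the lone party's own model.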
We think that P4 is correct as (1) from the mediator's perspective, party $i$'s submitted perturbed SS is only worth $v_i$ and rationality is defined according to the conventions of game theory, (2) each party may still want DP when alone, and (3) it seems natural for $i$'s reward value $r_i$ to be less than the value of its exact SS $s_i$ if $i$ selected an excessively strong DP guarantee (e.g., when collaborator $j$ only has one data point).
> Can actual individual rationality (AIR) be satisfied by the mechanism?
We assume that AIR means that each party $i$'s reward value $r_i$ is at least the value of its unperturbed SS $s_i$. AIR may be satisfied when parties are incentivized enough to select a large ϵ (weak DP). However, AIR cannot be theoretically guaranteed by the mechanism as parties are still free to seek stronger DP (footnote 2) that reduces the benefit of the collaboration and the mediator cannot access the private $s_i$ to generate $i$'s model reward.
The mediator should use party $i$'s perturbed SS $o_i$ to generate its model reward to incentivize $i$ to be truthful and contribute more valuable information (Q2 in App.I).
However, P4 can be theoretically guaranteed if the $𝜌$ used in the $𝜌$-Shapley fairness scheme is $\leq 𝜌_r$ defined in Sim et al's Theorem 1. We will highlight that the stronger AIR has not been theoretically guaranteed as a limitation.
>Weakness 3
We have been more precise about the exact computations and time complexity in App.F. In our revision, we will further reference the citations/pseudocode for computing the local SS, clarify that step 3 has to be repeated for each coalition $C \subseteq N$, and step 5 uses the scaled perturbed SS $\kappa_i o_j, \kappa_i c_j, \kappa_i Z_j$ for each party $j \in N$ for noise-aware inference.
We would add that when there are <6 parties (as in our experiments), it is feasible to compute the Shapley value (SV) exactly. When there are more parties, the SV has to be approximated, and the inexactness (absolute error in the SV estimate) can be controlled by sampling enough coalitions, following the steps outlined in [25, M, Z].
[M] Bounding the estimation error of sampling-based Shapley value approximation. arXiv:1306.4265, 2013.
[Z] Probably Approximate Shapley Fairness with Applications in Machine Learning. AAAI 2023.
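For illustration, a minimal sketch of exact versus permutation-sampling SV computation (the additive valuation `v` is a toy stand-in, not our Bayesian-surprise valuation):

```python
import itertools
import random

def shapley_exact(n, v):
    """Exact Shapley values via all permutations (feasible for small n)."""
    phi = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        coalition = set()
        for i in perm:
            before = v(frozenset(coalition))
            coalition.add(i)
            phi[i] += v(frozenset(coalition)) - before
    return [p / len(perms) for p in phi]

def shapley_sampled(n, v, num_samples, seed=0):
    """Permutation-sampling SV estimate; the absolute error can be driven
    down by sampling more permutations (cf. [M, Z])."""
    rng = random.Random(seed)
    phi = [0.0] * n
    for _ in range(num_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        coalition = set()
        for i in perm:
            before = v(frozenset(coalition))
            coalition.add(i)
            phi[i] += v(frozenset(coalition)) - before
    return [p / num_samples for p in phi]

# Toy additive valuation: v(C) = sum of per-party weights, so SV = weights
weights = [1.0, 2.0, 3.0]
v = lambda C: sum(weights[i] for i in C)
```

The exact version enumerates all $n!$ permutations, which is only practical for a handful of parties; the sampled version trades exactness for a controllable estimation error.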
> Limitations: The paper should emphasize the scope of the work.
In Sec. 2 & 8, we clearly stated that our work only covers the scope of Bayesian models with SS.
Our method would still work if a party submitted perturbed SS in _multiple_ rounds of interaction instead. The value of party $i$ (coalition $C$) should be the Bayesian surprise of party $i$'s (coalition $C$'s) SS from all rounds and party $i$'s model reward involves scaling all perturbed SS across rounds by $\kappa_i$. The mediator can reward each party only once at the end of the collaboration and make either of the following modifications to use our scheme:
- Sum the perturbed SS across rounds for each party $i$ (due to Line 101)
- Replace $o_N$ on Line 261 with the set of perturbed SS across rounds and modify noise-aware inference (Algo 1) by adding an extra for loop for different rounds
We will include this discussion in our revised paper.
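A minimal sketch of the first modification, assuming each party's SS simply add across rounds of disjoint data (party names and shapes below are illustrative):

```python
import numpy as np

def aggregate_rounds(perturbed_ss_by_round):
    """Sum each party's perturbed sufficient statistics across rounds,
    then run the single-round scheme on the sums (illustrative; assumes
    SS are additive over disjoint data batches, as on Line 101)."""
    return {party: np.sum(np.stack(rounds), axis=0)
            for party, rounds in perturbed_ss_by_round.items()}

rounds = {"party_1": [np.array([1.0, 2.0]), np.array([0.5, -1.0])],
          "party_2": [np.array([3.0, 0.0])]}
agg = aggregate_rounds(rounds)   # party_1 -> [1.5, 1.0]
```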
---
Rebuttal Comment 1.1:
Comment: > We have attached an overview of the quantities and steps involved in our private collaborative ML scheme in the global pdf.
Thanks, this is indeed helpful.
> An agent will perturb the sufficient statistic (SS) when alone to protect the privacy of data owners from curious users of its ML model. For example, a hospital would not want its doctors to infer much about any patient’s data and a firm would not want employee users to infer about customers.
Please clearly discuss scenarios where the data owner will also perturb the statistics individually in the revised version. This is an important point.
Can the paper's guarantees be extended to the setting with the stronger individual rationality definition, i.e., when a single agent does not perturb the statistics? From the response, it seems not. Note that while studying privacy-utility trade-offs in federated learning, a collaborative algorithm is usually expected to perform better than the non-collaborative baseline, which does not have to care about privacy. For instance, while doing collaborative Gaussian mean estimation with non-strategic agents, assuming low data heterogeneity, agents will benefit from collaboration as long as the privacy noise level is comparable to the inherent variance of a consensus mean estimate. This was the intuition behind my question: can the work highlight the utility of collaboration when agents do not need to add any noise on their own?
> We have been more precise about the exact computations and time complexity in App.F. In our revision, we will further reference the citations/pseudocode for computing the local SS, clarify that step 3 has to be repeated for each coalition ....
This would indeed improve the presentation, thanks.
> The mediator can reward each party only once at the end of the collaboration and make either of the following modifications to use our scheme:
I am not sure I understand the scheme and if it clarifies my concern. When I said that interaction happens across several rounds in most federated learning applications, I meant the agents also get to see a sequence of models instead of just one final model. From what I understand, the mediator only gives the agents a single model reward even with the modification. This is a collaborative learning model, but I am unsure if calling this setup federated learning is appropriate. This seems more akin to distributed estimation literature with privacy constraints. The real challenge with maintaining privacy across multi-round interactions is adding more noise which might lead to a worse privacy-utility trade-off. Unfortunately, most works dealing with incentives in federated learning do not consider a multi-round interaction.
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick and detailed reply! We will respond to your follow-up questions below.
> Please clearly discuss scenarios where the data owner will also perturb the statistics individually in the revised version. This is an important point.
Thanks for the suggestion. We will better clarify why and when the data owner will perturb the statistics individually in the revised version.
> Can the paper's guarantees be extended to the setting with the stronger individual rationality definition, i.e., when a single agent does not perturb the statistics? can the work highlight the utility of collaboration when agents do not need to add any noise on their own?
We believe you are asking whether we can guarantee stronger individual rationality (SIR/AIR), i.e., that the model reward $q_i(θ)$ trained on perturbed SS is more valuable than party $i$’s model trained on its exact SS $s_i$. The short answer is no. To add to our previous response on AIR, the mediator only incentivizes the participants against selecting excessively strong DP guarantees. As the mediator does not restrict the maximum DP noise added by each party, the mediator cannot control the privacy-utility tradeoff and guarantee SIR. Guaranteeing SIR is a non-trivial challenge left for future work.
However, we have considered the following modification to the reward mechanism to guarantee SIR. Instead of rewarding model parameter samples, the mediator can reward each party with perturbed SS $t_j^i$ (for Sec. 5.1) or $κ_i \boldsymbol{o}_j, κ_i c_j, κ_i Z_j$ (for Sec. 5.2) for every other party $j$. Then, each party $i$ is free to use its rewards and its own unperturbed SS $s_i$ for inference, thus achieving SIR. However, we did not go with this alternative mechanism as it faces incentive issues — as party $i$’s model reward would not be directly influenced by its submitted $o_i$, it may be less deterred (hence more inclined) to submit less informative or fake SS (see Question 2 in App. I).
Thanks for identifying an interesting and important point about strong IR. We will empirically evaluate if strong IR has been achieved, and include the theoretical limitation and above discussion in our paper. We wish to convey that the limitation is acceptable when parties care about privacy even when alone. Even when parties do not, the limitation is needed to incentivize parties to submit (i) informative and real perturbed SS that they are willing to use, while (ii) not compromising for weak DP guarantees.
> When I said that interaction happens across several rounds in most federated learning applications, I meant the agents also get to see a sequence of models instead of just one final model. From what I understand, the mediator only gives the agents a single model reward even with the modification.
Your understanding of our rebuttal is correct --- we suggested that the mediator gives the agents a *single* model reward after aggregating SS across rounds.
To clarify the second modification in our rebuttal on how to use our mechanism repeatedly, consider two parties (subscripted) who take part in $t$ rounds (superscripted). At each round, each party will only submit perturbed SS generated from _new_ data. The mediator can use Algo 1 to compute the noise-aware posterior $p(θ|o^1_1, \ldots, o^t_1, o^1_2, \ldots, o^t_2)$ and use it to replace $p(θ|o_1, o_2)$ from the single-round setting. Valuation and reward can be done as before. Note that although more noise is added than in the single-round setting to generate $o^1_i, \ldots, o^t_i$, the mediator has more fine-grained information from considering them as separate variables.
Interestingly, the post-processing property of DP will allow the mediator to use each party $j$'s perturbed SS $o^i_j$ from round $i$ to generate model rewards in rounds $i, \ldots, t$ without any additional privacy leakage.
> This is a collaborative learning model, but I am unsure if calling this setup federated learning is appropriate.
We agree with you that our focus on Bayesian models with SS does not fit the standard federated learning setup. In the paper, we have reiterated the focus on models with SS and “collaborative ML” in Sec. 2 and 8. Although we discussed multiple rounds above, we will state our focus on the single-round setting in our revision.
> Challenge with maintaining privacy across multi-round interactions
We agree that in a multi-round interaction, agents may have to add more noise to their sufficient statistics and it may lead to a worse privacy-utility trade-off than a single round setting. We have also identified other challenges dealing with incentives in the multi-round setting in Sec. 7 on related works (lines 369-376). This motivated us to focus on models with sufficient statistics as SS can be easily aggregated (lines 91-93) instead.
Once again, thank you for the suggested revisions! We hope that we have clarified your concerns and we are happy to answer further questions. | Summary: From a mostly empirical angle, the paper studies a new valuation metric for incentivizing agents to share their data for collaborative ML while ensuring the data they share is Renyi-DP. Particularly, the KL-divergence between agents' prior and posterior, or the Bayesian surprise, is used as the value of the (possibly perturbed) observations received by the agents. Finally, the proposed method is validated on a synthetic dataset, a regression dataset, and a classification dataset.
Strengths: 1. The experimental results are explained carefully, in detail, and the figures emphasize the key takeaways from the paper.
2. Overall the paper is easy to follow and nicely presented.
3. Ethical concerns and potential societal impact are discussed in depth, and in detail, in the appendix.
4. The problem being studied is an interesting one and especially timely given current conversation on privacy and ethical concerns of large scale ML models.
Weaknesses: 1. While the paper is mostly empirical and should be assessed accordingly, the proposed method still lacks mathematical justification. For instance, it is unclear for which choices of $\rho$ (if any) IR would be satisfied for all agents. Quantitative analysis of the desiderata for different values of $\rho, \lambda, \epsilon$, especially quantifying the interplay between $\rho$ and $(\lambda, \epsilon)$ Renyi-DP, would strengthen the results. At the current stage, it is unclear under what kinds of problem settings or privacy constraints the proposed mechanism would be feasible.
2. In the same vein, desiderata **P5** and **P6** are only discussed intuitively. For ML practitioners with money at risk, additional discussions (and perhaps mathematical or game-theoretic justifications) would be more persuasive.
3. While Appendix I, Q5 has partially addressed the relationship between the work and [Sim et al., 2020], as discussed above, it is not clear how **P5** fits into the existing theoretical framework around Shapley values. Similarly, while the tempering technique has not been introduced in prior works, the mathematical or game-theoretic implications of $\kappa$ are not addressed, and only an intuitive discussion is provided in the appendix.
4. The potential applications of the method seem limited in domain. The method does not appear to generalize beyond Bayesian linear regression. Moreover, calculating the proposed values already entails a costly MCMC inference procedure to evaluate the Bayesian surprise. It is unclear if such procedures would remain practical even for smaller neural nets. (While the appendix has discussed various approximation schemes, it may be difficult to persuade practitioners to adopt the method without additional mathematical guarantees on how robust the proposed desiderata are to the estimation errors in both the KL and the posterior.)
5. Desideratum **P2** is a strict relaxation of the weak efficiency constraint (R3) in [Sim et al., 2020]. In particular, it requires only one agent overall, as opposed to one agent in each group, to take full advantage of the shared data, and may discourage agents from participating in the mechanism (especially with limited IR guarantees).
- [Sim et al., 2020]. "Collaborative machine learning with incentive-aware model rewards." International conference on machine learning. PMLR, 2020
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. Can the authors explain **P1** in more detail? From my understanding, it seems like **P1** means that agents’ reward cannot reveal too much information about other agents’ data. It would be great if the authors could confirm this. It might be me but currently the definition looks a bit confusing.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors have discussed in detail the potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review & questions! We will address your concerns below and include them in the revised paper. We hope our clarifications will improve your opinion of our work.
> W1: Math justifications
To clarify, our work includes both mathematical and empirical justifications of the proposed method. We have theoretically shown that V1-V3 (Sec.3, App.C) and P1-P2 can be satisfied.
P3-P4 can be satisfied by using the quantitative analysis of $𝜌$ in Theorem 1 of Sim et al. [2020] since we adopted their $𝜌$-Shapley reward scheme (line 209).
In our revision, we will
- refer readers to Sim et al. [2020] for analysis of the impact of varying $𝜌$;
- clarify that for **any** privacy constraint $(λ,ϵ_i)$ and problem setting (i.e., dataset), IR is satisfied for all agents if $0 < 𝜌 \leq 𝜌_r$ ($𝜌_r$ is defined in Sim et al [2020]'s Theorem 1 and computed based on $v_i$ & $v_N$). Our results should hold for any privacy constraint;
- elaborate V3: A higher $λ$ or smaller $ϵ_i$ should lead to lower valuation.
There is limited quantitative analysis of the interplay with $(λ,ϵ_i)$ as, like the properties of the dataset, they only affect the reward value and choice of $𝜌$ _indirectly_ by affecting $\\{v_C\\}_{C \subseteq N}$.
However, empirically, we have shown that IR holds for a range of $ϵ_i$'s (Figs.2,3) and additionally for a larger $λ$ in Fig.11.
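As an illustrative sketch of how $𝜌$ controls the target reward values (the Shapley values and $v_N$ below are toy numbers, and the formula follows our reading of Sim et al. [2020]'s $𝜌$-Shapley scheme):

```python
def rho_shapley_rewards(shapley, v_N, rho):
    """Target reward values r_i* = v_N * (phi_i / max_j phi_j)^rho, per the
    rho-Shapley scheme of Sim et al. [2020] (as we read it): smaller rho
    pulls all rewards toward v_N, larger rho tracks the Shapley values."""
    phi_max = max(shapley)
    return [v_N * (phi / phi_max) ** rho for phi in shapley]

phi = [0.5, 1.0, 2.0]                      # illustrative Shapley values
r_sharp = rho_shapley_rewards(phi, 4.0, 1.0)  # -> [1.0, 2.0, 4.0]
r_flat = rho_shapley_rewards(phi, 4.0, 0.2)   # flattened toward v_N = 4.0
```

Note how the top contributor always receives the full value $v_N$, while a smaller $𝜌$ raises the rewards of the remaining parties, which is what makes IR easier to satisfy for $0 < 𝜌 \leq 𝜌_r$.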
> W3: how P5 fit in the existing theoretical framework around Shapley values (SV)
P5 complements the framework. The framework decides the target reward values $r^*_i$; then, collaborative ML works [46,50,ours] propose mechanisms to generate model rewards that realize the target values. Without P5, the ML practitioner may be indifferent among different model rewards $q_i(θ)$ that achieve the same target reward value. As P5 is a secondary criterion that is maximized after the other desiderata have been achieved, it does not change existing SV results.
> W3: Mathematical implication of $κ_i$
In App.E.2, we mathematically prove that a smaller $κ_i$ decreases the Bayesian surprise.
In our revision, we will cite existing works (on likelihood tempering, power posterior, e.g., https://andrewcharlesjones.github.io/journal/power-posteriors.html) that discuss the implication/interpretation of varying $κ_i$ (e.g., synthetically reducing the dataset size).
In App.'s Fig.7, we empirically show that likelihood tempering leads to better similarity to the grand coalition’s $p(θ|o_N)$.
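A toy conjugate-Gaussian sketch (not the paper's model; all numbers are illustrative) of how tempering the likelihood by a smaller $κ$ lowers the Bayesian surprise:

```python
import math

def tempered_posterior(prior_mu, prior_var, n, xbar, noise_var, kappa):
    """Power posterior for a Gaussian mean with known noise variance:
    tempering the likelihood by kappa acts like scaling the sample size n."""
    prec = 1.0 / prior_var + kappa * n / noise_var
    mu = (prior_mu / prior_var + kappa * n * xbar / noise_var) / prec
    return mu, 1.0 / prec

def kl_gauss(mu1, var1, mu0, var0):
    """Closed-form KL(N(mu1, var1) || N(mu0, var0))."""
    return 0.5 * (math.log(var0 / var1) + (var1 + (mu1 - mu0) ** 2) / var0 - 1.0)

def bayesian_surprise(kappa, prior_mu=0.0, prior_var=1.0,
                      n=50, xbar=0.8, noise_var=1.0):
    """Surprise = KL(tempered posterior || prior)."""
    mu, var = tempered_posterior(prior_mu, prior_var, n, xbar, noise_var, kappa)
    return kl_gauss(mu, var, prior_mu, prior_var)

# Smaller kappa leaves the posterior closer to the prior, hence lower surprise
assert bayesian_surprise(0.3) < bayesian_surprise(1.0)
assert abs(bayesian_surprise(0.0)) < 1e-12  # kappa = 0: posterior = prior
```

In this toy case the effect is analytic: $κ$ simply rescales the effective sample size, so the posterior interpolates between the prior ($κ=0$) and the full-data posterior ($κ=1$).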
> W2: P5 & P6 justifications
**P5** We introduce $r'$ to address parties' secondary preference of similarity to the posterior $p(θ|o_N)$ and a model reward that makes similar predictions. $r'$'s definition is inspired by lines 158-60 & 724 and should decrease with stronger privacy/less data.
A preference for lower KL divergence between the posterior and $q_i(θ)$
- has precedent (e.g., it is minimised by expectation propagation in Bayesian ML);
- is empirically justified as we observe that for the same $r_i$, higher similarity (purple line) with $p(θ|o_N)$ in Figs.2f,12b,13b leads to a lower MNLP$_r$ (higher model utility) in Figs.3d-f.
**P6** Maximizing group welfare is a common concept in game theory & collaborative ML [46,50,D]. From party $i$'s perspective, a higher group welfare will either (i) increase $i$'s reward value or (ii) others' reward value. (i) is desired and (ii) is acceptable to party $i$ when fairness is still ensured.
If the above is insufficient, could you clarify what kind of justification you had in mind?
[D] Optimality and stability in federated learning: A game-theoretic approach. NeurIPS 2021.
> W4: Applications
We described in lines 91-3 that our work applies to Bayesian models with SS, not just linear regression. We considered logistic regression in our experiments.
We acknowledge that our method cannot be directly applied to neural networks. However, it may be applicable when ML practitioners manage to generate SS. For example, ML practitioners tend to use existing large pre-trained models (e.g., VGG-16 for images) and only fine-tune the last layer(s), so the linear/logistic model would still be useful.
> Additional mathematical guarantees on how robust the proposed desiderata are to the estimation errors in both KL and posterior
P1 and P2 will always hold. P3 and P4 may be affected by errors in the KL and posterior estimation.
We will justify why the errors can be low:
- In App.C.3, we extensively discuss how the error in KL estimation can be reduced by taking more samples.
- Though the DP noise-aware inference works [Bernstein and Sheldon, 2019; Kulkarni et al, 2021] did not provide theoretical results about the estimation errors, they have empirically demonstrated the similarity to the non-private posterior.
- Empirically, we checked MCMC diagnostics (e.g., R-hat) to ensure that convergence to the true posterior distribution has been achieved and that the variance of KL estimation across runs is low (Tables 2,3).
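A minimal sketch of sample-based KL estimation whose error shrinks with more samples (the unit-variance Gaussian example is illustrative, not our actual posterior):

```python
import math
import random

def mc_kl_estimate(sample_p, log_p, log_q, n, seed=0):
    """Monte Carlo estimate of KL(p||q) = E_p[log p(x) - log q(x)];
    the standard error shrinks as O(1/sqrt(n))."""
    rng = random.Random(seed)
    xs = [sample_p(rng) for _ in range(n)]
    return sum(log_p(x) - log_q(x) for x in xs) / n

# Unit-variance Gaussians: closed-form KL(N(1,1) || N(0,1)) = 0.5
log_norm = lambda mu: (lambda x: -0.5 * (x - mu) ** 2
                       - 0.5 * math.log(2 * math.pi))
est = mc_kl_estimate(lambda rng: rng.gauss(1.0, 1.0),
                     log_norm(1.0), log_norm(0.0), 200000)
```

With a closed-form target available, one can check directly that the estimate concentrates near the true KL as the number of samples grows, mirroring the variance checks in Tables 2 and 3.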
> P2 vs. R3 in [Sim et al., 2020]
R3 only allows an agent per group $C$ to take full advantage of the shared data of $C \in CS$ when the coalition structure $CS$ consists of multiple disjoint groups.
However, when the coalition structure is the grand coalition $CS=\\{N\\}$ (the desired case in Sec.4 Para.2 of [Sim et al, 2020]), only 1 agent can benefit from the shared data of $N$. This is the same as P2.
In our paper, we assume that $N$ will form for simplicity. However, we can consider other coalition structures or ensure the formation by selecting $𝜌 \leq 𝜌_s$ according to Sim et al's Theorem 1.
> Q1. Explain P1.
Your understanding is generally correct.
To be specific, other agents have already revealed private information with $(λ,ϵ_k)$-DP guarantee to the mediator. Agent $i$’s reward cannot depend on more private information (e.g., ask for more data or samples of model parameters). Instead, it should only use the information already disclosed to the mediator.
---
Rebuttal Comment 1.1:
Title: Update to Rebuttal
Comment: Thank you for the detailed feedback! I am mostly convinced by the results. In particular, the discussion on theoretical justifications is greatly appreciated, particularly the part on IR and $\rho$. Looking at only [Sim et al., 2020], it wasn't clear if their results still translate to the setting here and the added discussion will be beneficial.
Additional comments on P6: Sorry for the poorly worded and unclear comment. Maximizing the sum of rewards (so-called "social welfare") is itself reasonable and used throughout the economics literature. On the other hand, maximizing *total similarity* is not a common objective, nor is it found in prior research on algorithmic game theory. (I also looked into the referenced papers and cannot find *similarity maximization* as a desideratum.) As it cannot be guaranteed that the grand coalition is always formed, it is unclear how this preference for similarity would affect the group welfare.
Additional comments on P2, P5, P6: All these assumptions work great when a grand coalition is formed, but this is not always the case. Combined with the fact that P2 only guarantees the performance of a single participant (as opposed to one in each group), the concern is that groups in the "smaller coalitions" will be unfairly disadvantaged. When a grand coalition is not formed, the potential impact of the mechanism's preference for similarity is unclear.
I'd gladly raise my score if the you could provide further insights on "the impacts of P[2, 5, 6] when a grand coalition is not formed". Thank you again for the detailed feedback.
---
Reply to Comment 1.1.1:
Comment: Thank you for responding! We will add the discussion on how Sim et al. [2020]’s results translate to that in our paper in our revision.
> provide further insights on "the impacts of P[2, 5, 6] when a grand coalition is not formed".
When the grand coalition does not form, the core idea is that instead of guaranteeing the desiderata based on the grand coalition, we guarantee the original desiderata for each coalition $C$ in the coalition structure $CS$ (*). In addition, for any party $i \in C$, we use the perturbed SS $o_C$ to generate the rewards instead of the grand coalition's $o_N$. This is because $i$ has chosen to only work with $C$ and thus should not use the SS submitted by others.
We will rewrite P2, P5, P6 for the case when the grand coalition does not form below.
- **P2.** For any coalition $C \in CS$, there is a party $i \in C$ whose model reward is the coalition $C$'s posterior, i.e., $q_i(\theta) = p({\theta |o_C})$. It follows that $r_i = v_C$ (as in [Sim et al., 2020], R2). One in each coalition (group) gets the coalition's best model.
- __P5.__ Among multiple model rewards $q_i(\theta)$ whose value $r_i$ equates the target reward $r^*\_i$, we secondarily prefer one with a higher similarity $r'_{i,C} = -D\_{KL}(p(\theta |o\_C);q_i(\theta))$ to the coalition's posterior $p(\theta |o\_C)$ where $i \in C$.
- **P6.** _After_ maximizing the total reward value $\sum^n_{i=1} r_i$, the reward scheme should also maximize the total similarity $\sum^n_{i=1} r'_{i,C}$.
(*) Note that like in [Sim et al., 2020] , we assume that the coalition structure is known and do not propose how to derive or select the coalition structure. However, we can select $\rho$ according to Theorem 1 to ensure the grand coalition will form.
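As a concrete illustration of the P5 tie-breaking rule, the similarity term has a closed form when the posteriors are Gaussian. The sketch below uses hypothetical 1-D Gaussian rewards (the numbers and the Gaussian assumption are ours, purely for illustration):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    # closed-form KL( N(m1, s1^2) || N(m2, s2^2) )
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# coalition posterior p(theta | o_C) -- hypothetical parameters
p_mean, p_std = 0.0, 1.0

# candidate model rewards q_i(theta), assumed to have the same reward value r_i;
# P5 breaks the tie by the similarity r'_{i,C} = -KL(p || q_i)
candidates = [(0.5, 1.0), (0.2, 1.0), (0.0, 2.0)]
similarities = [-kl_gauss(p_mean, p_std, m, s) for m, s in candidates]
best = max(zip(similarities, candidates))[1]  # candidate closest to the posterior
```

Here the mediator would select the candidate $\mathcal{N}(0.2, 1)$, since it is closest to the coalition posterior in KL among rewards of equal value.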
> On the other hand, maximizing total similarity is not a common objective, nor is it found in prior research on algorithmic game theory. (I also looked into the referenced papers and cannot find similarity maximization as a desiderata.) As it cannot be guaranteed that the grand coalition is always formed, it is unclear how would this preference for similarity affect the group welfare.
Sorry for misunderstanding your question previously. Indeed, similarity maximization is non-standard, and we explain below why it should not affect the group welfare.
In the paper, we wrote that _while_ maximizing the total reward value $\sum^n_{i=1} r_i$, the reward scheme should also maximize the total similarity. We will revise the _while_ to _after_. With this modification, P6 can be satisfied by "maximizing the total reward value $\sum^n_{i=1} r_i$'' and satisfying P5.
Satisfying P5 would ensure that the mediator selects the reward $q\_i(\theta)$ with higher similarity $r'\_i$ or $r'\_{i,C}$ among rewards with the same reward value $r\_i$. Thus, it would help to maximize the total similarity $\sum^n_{i=1} r'\_{i,C}$ in P6 for the same group welfare ($\sum^n_{i=1} r\_i$) value.
As P5/similarity is a secondary criterion that is maximized after other desiderata have been achieved, it does not change existing SV and group welfare results. We will consider removing the similarity part in P6 to align with the current literature as it is already implied by P5.
We have given the modification when the grand coalition does not form above, and the reason for preferring higher similarity in the previous response.
Once again, thank you for your suggestions that will help to improve the paper. We hope the above clarifications address your concerns and we will be happy to provide further clarifications. | Rebuttal 1:
Rebuttal: We thank all reviewers for the encouraging feedback that recognizes the novelty of our work. We appreciate the high-quality reviews and valuable feedback which we will consider carefully in revising our paper. In our rebuttal, we have
- Clarified the (mathematical) justifications of P1-6 (Reviewer zMuU);
- Clarified reviewers’ doubt about the problem setting by
- Referring the reviewers to Secs. 2 & 8 which stated that our approach works for models with sufficient statistics;
- Explaining the need for DP and perturbed SS when alone (Reviewer kKLU);
- Justifying our choice of rationality in P4 (Reviewer kKLU).
- Pointed the reviewers to App. F for a discussion of the computational complexity and main steps of our scheme. We also provided an alternative diagrammatic overview in the attached PDF.
Please let us know if you have any more questions, and we would be happy to address them within our allowed period.
We would like to correct a minor typo in P5: $r'_i$ should be the negated KL divergence instead. Our subsequent theory/experiment results are not impacted.
Pdf: /pdf/ba8b6c05b16e8a80725f4bd5380a99999181f18c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models | Accept (poster) | Summary: This paper proposes a method for solving linear inverse problems using pre-trained latent diffusion models (LDMs) as a prior. The main idea is to extend the original diffusion posterior sampling (DPS) to the LDM setting by approximating the gradient term of the intractable likelihood. Two approximations, GML-DPS and PSLD, are proposed, and PSLD turns out to achieve the better performance. The paper also theoretically analyzes DPS and PSLD in a toy linear setting. Experimental results show superior performance compared with the original DPS on a variety of tasks, such as random/block inpainting, denoising, and super-resolution.
Strengths: 1. It is the first work, to my best knowledge, that LDM is used to address the linear inverse problems.
2. Compared to the original DPS, the proposed PSLD achieves apparently better performances in most cases.
3. Some theoretical analysis is also provided, although it is based on some toy linear model.
Weaknesses: 1. There is a lack of analysis of the complexity, or running time, of the proposed PSLD method. Adding details of the running time of PSLD and other comparison methods is suggested.
2. Section 3 contains some theoretical analysis using a simple toy linear model. For example, Theorem 3.4 states that DPS exactly recovers the ground-truth sample. It is known that DPS uses a Laplace approximation for the gradient term of the likelihood. How, then, can it exactly recover posterior samples with this crude approximation? The same question applies to Theorems 3.7 and 3.8. To what extent can the results of Theorems 3.4, 3.7, and 3.8 be generalized to real diffusion models?
3. In the experiments part, while PSLD shows superior performances over the original DPS, there is a lack of comparison with other latest diffusion-model-based algorithms that were available long before NeurIPS submission, e.g.,
[A] Wang, Yinhuai, Jiwen Yu, and Jian Zhang. "Zero-shot image restoration using denoising diffusion null-space model." arXiv preprint arXiv:2212.00490 (2022).
[B] Meng, Xiangming, and Yoshiyuki Kabashima. "Diffusion model based posterior sampling for noisy linear inverse problems." arXiv preprint arXiv:2211.12343 (2022).
[C] Song, Jiaming, Arash Vahdat, Morteza Mardani, and Jan Kautz. "Pseudoinverse-guided diffusion models for inverse problems." In International Conference on Learning Representations. 2022.
4. Most previous image restoration methods using diffusion models also consider colorization; can the authors add some results for colorization as well? Besides, only Tables 2 and 5 show quantitative comparisons with other methods, and only for inpainting. It is suggested to add quantitative comparisons with other methods on other tasks such as super-resolution, denoising, colorization, etc.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Three additional questions:
1. How do you add the additive Gaussian noise with standard deviation \sigma_y?
2. Can I understand that the improved performance of PSLD over DPS comes solely from the improvement of diffusion models in the latent space over that in the pixel space? In other words, if DPS is implemented with more powerful diffusion models, is it possible to outperform PSLD? In this sense, how to ensure a fair comparison between PSLD and DPS?
3. Are the results of PSLD sensitive to the choice of hyper-parameters? Compared to DPS which has only one scaling hyperparameter, there are two scaling hyperparameters, namely, \eta and \gamma. It is suggested to evaluate the sensitivity of these hyperparameters.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weakness and questions parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer xKok
Dear Reviewer xKok,
Thank you for the review and for pointing out that **our study achieves state-of-the-art performance** in addressing inverse problems with latent diffusion models, and **unleashes the capacity of large-scale pre-trained LDMs** for sample recovery.
Below, we provide answers to your remaining comments and questions.
(1) **NFEs of different algorithms.**
Since the runtime of diffusion-based posterior samplers depends heavily on the number of Neural Function Evaluations (NFEs), we add a comparison of different algorithms in terms of NFEs.
| |PSLD (ours)|DPS |DDRM|RED [2]|$\Pi$GDM [1]|Palette [3]|Regression|SNIPS [4]|
|:---|:----------|:--------|:--|:---|:--|:------|:---------|:----|
|NFEs|100 to 1000|1000|20 |500|20 to 100|1000 |1 |1000 |
The computational complexity of our method PSLD is similar to that of DPS. For a relatively small number of diffusion steps (say 100), PSLD significantly outperforms DPS: the DPS-generated samples do not look like realistic images for fewer diffusion steps [1].
**Runtime of different algorithms for Super-resolution task.**
|Method | Runtime (s)|
|:-------------|:-----------|
|DMPS [6]|67.02 |
|DPS |180.00 |
|DDNM+[5]|18.5 |
|DDRM |2.15 |
|MCG |193.71 |
|PSLD-LDM|187.00 |
|PSLD-LDM (LAION-400M)^|190.00 |
|PSLD-SD (LAION-5B)* |194.25 |
*PSLD-SD (trained on LAION-5B) takes 776 s to generate 512x512 images. To compare with other methods, we divide its runtime by 4. ^PSLD-LDM (LAION-400M) uses a diffusion model trained on LAION-400M dataset. All the other methods use diffusion models trained on FFHQ and produce 256x256 images.
(2) **Exact recovery is not possible by DPS.**
Although DPS uses a Laplace approximation for the gradient of the likelihood, this approximation is not necessary in a recoverable linear model setting, as the problem can be solved exactly. The same argument also holds for Theorems 3.7 and 3.8.
(3) **PSLD is better than DPS, but no comparison with other latest diffusion-model-based algorithms available before NeurIPS deadline.**
Thanks for bringing these recent studies to our attention. Unfortunately, none of the suggested papers are **latent diffusion-model-based algorithms**. Nevertheless, we have compared with these methods, please see our answer to (7) and the attached PDF.
(4) **How do you add Gaussian noise with standard deviation σ.**
We sample $\hat{n} \sim \mathcal{N}(0,I)$ and then scale this sample by $\sigma$, i.e., $n = \sigma \hat{n}$, to generate the IID Gaussian noise $n \sim \mathcal{N}(0,\sigma^2 I)$.
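In NumPy this sampling step amounts to the following minimal sketch (the sample size and $\sigma_y$ value are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_y = 0.05                          # noise standard deviation
n_hat = rng.standard_normal(100_000)    # n_hat ~ N(0, I)
n = sigma_y * n_hat                     # n ~ N(0, sigma_y^2 I)
# the noisy measurement would then be y = A @ x + n
```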
(5) **If DPS is implemented with a powerful diffusion model, can it outperform PSLD?**
Our goal is to develop a framework to leverage the power of pre-trained latent diffusion models, such as Stable Diffusion. Our focus is not to maximize the margin by which we beat the previous state-of-the-art DPS, but to unlock the potential of large pre-trained LDMs. Needless to say, the more powerful the prior, the better it is for posterior sampling (e.g., classical priors to GANs and GANs to Diffusion). But previous posterior sampling algorithms (including DPS, DMPS and DDRM) only apply to pixel-space diffusion models, which leaves a gap in leveraging the power of pre-trained LDMs. In this paper, we bridge this gap in the literature, as pointed out by Reviewer vbWT.
(6) **Robustness of hyper-parameters.**
The values of {$\gamma = 0.1$ and $\eta = 1$} are fixed for most of the practical tasks as mentioned in the appendix, justifying their robustness.
(7) **Additional quantitative Results.**
As suggested by the reviewer, we have added the following new results:
| Method | SR (4X) PSNR | SR (4X) SSIM | Gaussian Deblur PSNR | Gaussian Deblur SSIM |
|:----------|:--------|:--------|:--------------------|:--------|
|PSLD (Ours)|30.73|0.867|30.10|0.843
|GML-DPS (Ours)|29.77|0.860|29.21|0.820
|DMPS[6]|27.63|-|25.41|-
|DPS|25.67|0.852|24.25|0.811
|DDRM|25.36|0.835|23.36|0.767
|MCG|20.05|0.559|6.72|0.051
|PnP-ADMM|26.55|0.865|24.93|0.812
|Score-SDE|17.62|0.617|7.12|0.109
|ADMM-TV |23.86|0.803|22.37|0.801
| Method | SR (4X) FID | SR (4X) LPIPS | Gaussian Deblur FID | Gaussian Deblur LPIPS |
|:----------|:--------|:--------|:--------------------|:--------|
|PSLD (Ours)|34.28|0.201|41.53|0.221
|DPS|39.35|0.214|44.05|0.257
|DDRM|62.15|0.294|74.92|0.332
|MCG|87.64|0.520|101.2|0.340
|PnP-ADMM|66.52|0.353|90.42|0.441
|Score-SDE|96.72|0.563|109.0|0.403
|ADMM-TV |110.6|0.420|186.7|0.507
### Concluding Remark
Please let us know if the clarifications and additions suitably address your concerns. We are happy to address any remaining points during the discussion phase.
### Reference
[1] Song, Jiaming, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In: International Conference on Learning Representations. 2023. url: https://openreview.net/forum?id=9_gsMA8MRKQ.
[2] Yaniv Romano, Michael Elad, and Peyman Milanfar. The little engine that could: Regularization by
denoising (RED). arXiv preprint arXiv:1611.02862, November 2016.
[3] Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D. and Norouzi, M., 2022, July. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings (pp. 1-10).
[4] Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems
stochastically. arXiv preprint arXiv:2105.14951, May 2021.
[5] Yinhuai Wang, Jiwen Yu, and Jian Zhang. Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. In: The Eleventh International Conference on Learning Representations. 2023. url: https://openreview.net/forum?id=mRieQgMtNTQ.
[6] Meng, X. and Kabashima, Y., 2022. Diffusion model based posterior sampling for noisy linear inverse problems. arXiv preprint arXiv:2211.12343.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for the rebuttal, especially the added comparisons with other methods, which further show the superiority of the proposed method. Please revise the future version accordingly. I have raised my score after reading the rebuttal. | Summary: This paper focuses on solving inverse problems using diffusion-based probabilistic models without **retraining**. The authors build on "Diffusion posterior sampling" (DPS), which basically builds a diffusion model for the posterior using only the score of the prior distribution. Indeed, the score of the posterior $\nabla \log p_t(x_t | y)$ is equal to $\nabla \log p(y|x_t) + \nabla \log p_t(x_t)$, where the second term is given by the prior diffusion model. The first term is however intractable but can be approximated by noting that $p(y | x_t) = \int p(y|x_0) p(x_0 | x_t) dx_0$. This integral is then approximated by simply plugging in the posterior mean, i.e. $p(y | x_t) \approx p(y | E(x_0 | x_t))$, which itself can be approximated using Tweedie's formula and the prior score. The original DPS paper only considers the diffusion in the pixel space, and this paper extends the methodology to latent diffusion models. While the direct extension is straightforward, the authors claim that it does not work in practice, and they provide a rather intuitive modified posterior diffusion model that ensures consistency at the borders of the mask.
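The Tweedie step described in this summary can be sanity-checked in a toy 1-D Gaussian case where the posterior mean is available in closed form (an illustrative sketch, not from the submission):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_t = 0.7                              # diffusion noise level at time t
x0 = rng.standard_normal(10_000)           # prior: x0 ~ N(0, 1)
xt = x0 + sigma_t * rng.standard_normal(10_000)

# score of the marginal p_t = N(0, 1 + sigma_t^2), known in closed form here
score = lambda x: -x / (1.0 + sigma_t**2)

# Tweedie's formula: E[x0 | xt] = xt + sigma_t^2 * score(xt)
tweedie_mean = xt + sigma_t**2 * score(xt)

# exact Gaussian posterior mean for comparison
exact_mean = xt / (1.0 + sigma_t**2)
```

With a Gaussian prior the two quantities coincide exactly; for multimodal priors (e.g. Gaussian mixtures) Tweedie's formula still gives the posterior mean, but plugging it into $p(y|\cdot)$ is only an approximation.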
Strengths: The paper tackles an important problem and provides a sound methodology for diffusion posterior sampling for latent space diffusion models. Furthermore, the numerical experiments suggest that this method outperforms other existing methods.
Weaknesses: The fact that the authors show recovery in a 2-step diffusion model in a very basic setting proves nothing. First, the results hold only when the optimal solution is normalized. In realistic settings and with multi-step diffusion, one does not know in advance how to rescale the drift (if rescaling is even the right thing to do) so that the diffusion model matches the target distribution. Next, the authors prove that DPS "provably" recovers the posterior, but this to me is also meaningless, since one can easily show numerically in a noisy inpainting setting with Gaussian mixtures (the posterior is tractable in this case) that DPS **does not** in fact sample the posterior. It samples outside the support of the posterior even in the simplest settings. I believe that the authors should provide a numerical experiment where the target posterior is known (again, Gaussian mixtures) and show that their method does sample from the posterior. To me there is no reason why it should exactly sample from the posterior and not outside the support like DPS. Indeed, this issue is not related to the quality of the generative model but rather to the fact that DPS (and truthfully many of the diffusion posterior sampling methods) are not exact samplers, since there is no correction step as in MCMC algorithms. I believe that the authors should drop the "provably" in the title as it is misleading.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What do the authors mean by the *curse of ambient dimension*, line 226-228? In DPS the gradient is indeed computed in the $d$-dimensional space but this is also the case of PSLD by the chain rule (and in fact the computational cost of PSLD is slightly larger).
- In Figure 2, right panel, an inpainting example is considered. In this case $p(y|x_0)$ is a Dirac delta, so I am not sure how the authors manage to take the gradient. Do the authors approximate the Dirac delta with a Gaussian with small variance? Was the same courtesy applied to DPS?
- It is quite surprising that in Algorithm 2 (but also Algo1) nothing depends on the measurement variance. Why is it not taken into account?
- While I could guess what it represents, $x_0^*$ is not defined anywhere.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see below.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer S3JW
Dear Reviewer S3JW,
Thank you for the review and for pointing out the **importance of our work in leveraging the power of latent-based diffusion models** in solving inverse problems with state-of-the-art performance.
Below, we provide answers to your remaining comments and questions.
(1) **Recovery in two-step process is a basic setting.**
The main purpose of the theoretical analysis is to give **intuition** (as rightly pointed out by Reviewer vbWT) on why gluing term is critical to the empirical success of PSLD. The two-step process in a linear model setting **serves this purpose without unnecessary mathematical complications**. To elaborate, (a) vanilla extension of DPS fails due to many-to-one mapping of the VAE encoder, (b) GML-DPS fails due to infinitely many fixed-points of the linear system, and (c) PSLD works due to its inherent nature to find the stable fixed-point. The theoretical analysis gave us the intuition for the gluing objective that led to contraction of the distance from optimal solution and thereby strong empirical performance. Besides, the main contribution of the paper is to solve inverse problems using latent diffusion models, unlocking the potential of large foundation models (e.g. Stable Diffusion) as rightly pointed out by Reviewer vbWT, yVXi and xKok. We substantiate our contribution with large-scale experiments typically considered in practice.
(2) **One does not know in advance how to scale.**
We would like to point out that the scaling factor is a hyper-parameter that is usually tuned in practice. Our theory provides intuitions on how to better tune this factor and rule out some unwanted experiments. Besides, we believe this is an issue of the generative model, not posterior sampling. As long as we have a pre-trained generative model that can sample from the data distribution, our method can leverage this model for posterior sampling.
(3) **DPS does not sample the posterior in noisy setting.** The setting we consider is noiseless and exactly recoverable, where we prove that DPS samples the posterior under valid assumptions. We will clarify this noiseless setting in the revised version.
(4) **Many diffusion posterior sampling algorithms are not exact samplers.**
Many diffusion posterior samplers are not exact samplers, but their empirical performance is significantly better than that of any of the MCMC algorithms with strong theoretical guarantees. To the best of our knowledge, these MCMC algorithms with strong theoretical results have only been shown to work in toy experimental settings. **Provably in title.** To remove ambiguity, we will change the title to *"Solving **Linear** Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models"*.
(5) **What do the authors mean by the curse of ambient dimension?**
As we discuss in Section 3, PSLD computes the gradients in the latent space with dimension $k$ (wrt $\mathbf{z}_i$), whereas DPS computes the gradients in the pixel space with dimension $d$ (wrt $\mathbf{x}_i$). In practice, the computational complexity of Stable Diffusion model ($\sim 4.00GB$) is higher (roughly 6 times) than the computational complexity of the encoder-decoder model ($\sim 700 MB$). Therefore, applying the chain rule in the encoder-decoder and running diffusion in the latent space is less expensive than applying diffusion models in the pixel space directly.
(6) **How to take gradients of $P(y|x_0)$ in block inpainting?**
As correctly pointed out by the reviewer, we use a Gaussian approximation of the Dirac delta. The same idea is applied in the DPS approximation for handling block inpainting.
(7) **It is quite surprising that in Algorithm 2 (but also Algo1) nothing depends on the measurement variance. Why is it not taken into account?**
We believe there is a misunderstanding. Recall that this is a maximum likelihood estimation problem. Since $\mathbf{y} = AD(\mathbf{z})+\sigma_y\mathbf{n}$, where $\mathbf{n}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$, we have $P(\mathbf{y}|\mathbf{z}) \propto \exp{\left(-\frac{1}{2\sigma_y^2} \| \mathbf{y} - AD(\mathbf{z})\|_2^2\right)}$. Taking the gradient of the negative log-likelihood gives a term proportional to $\nabla_z \| \mathbf{y} - AD(\mathbf{z})\|_2^2$, and the step size $\eta$ absorbs the scaling factor $\frac{1}{2\sigma_y^2}$.
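As an illustration of this data-fit gradient, the sketch below uses a hypothetical *linear* decoder $D(\mathbf{z}) = D\mathbf{z}$ (the real decoder is a nonlinear VAE handled by automatic differentiation; all matrices here are made-up toy examples):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, m = 8, 4, 6
Dmat = rng.standard_normal((d, k))   # hypothetical linear decoder matrix
A = rng.standard_normal((m, d))      # linear measurement operator
z_true = rng.standard_normal(k)
y = A @ Dmat @ z_true                # noiseless measurement of the true latent
z = rng.standard_normal(k)           # current latent iterate

def grad_data_fit(z):
    """Gradient of ||y - A D(z)||^2 with respect to z, for the linear decoder."""
    residual = y - A @ (Dmat @ z)
    return -2.0 * Dmat.T @ (A.T @ residual)
```

A guidance update would then be `z = z - eta * grad_data_fit(z)`, with `eta` absorbing the $\frac{1}{2\sigma_y^2}$ factor as in the derivation above.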
(8) **Definition of $x_0^\*$.**
Thank you for pointing out the typo. We will correct it in the revision. We denote by $x_0^*$ the true underlying sample as you could have rightly guessed.
### Concluding Remark
Please let us know if the clarifications and additions suitably address your concerns. We are happy to address any remaining points during the discussion phase.
---
Rebuttal Comment 1.1:
Comment: I would like to thank you for your response. I still have some disagreements on certain points:
**DPS does not sample the posterior in the noisy setting - Many diffusion posterior sampling algorithms are not exact samplers.** What I meant is that the result proven for DPS in the paper does not have any practical implication; as soon as one departs from the assumptions made in the paper, DPS fails to sample from even the simplest posterior. The authors can perhaps try it on simple toy examples; take a $d$-dimensional Gaussian mixture and observe only one coordinate, for example. DPS will sample inside but also outside of the support of the posterior, and there is no evidence on why this shouldn't be the case for the proposed algorithm. Hence, the assumptions made in the paper are very strong and yield results with questionable meaning.
Regarding the comment on MCMC, I strongly disagree with the conclusion. Even assuming that MCMC algorithms only work on toy examples, methods such as DPS do not work on such toy examples either, and as such there is no reason to believe that they sample approximately from the correct posterior in high-dimensional examples if they fail on the simplest ones. The quantitative metrics that are usually used, like FID and LPIPS, mean nothing in practice; they do not assess the quality of posterior sampling. They quantify how coherent the images are, but not whether they are accurate samples from the posterior. It is practically impossible to know if methods such as DPS or PSLD sample from the posterior, as we do not have access to samples from the said posterior. The only way to get a rough idea is to use these methods on examples for which the posterior is available in closed form.
I have one remaining question; from my experience DPS is very sensitive to the "step-size" $\zeta_i$. It can yield very bad results when chosen inappropriately. In the main paper there does not seem to be any discussion on this matter for your methods and I believe that this is quite important. Did the authors tune the parameter $\eta$? Furthermore, I understand that in the derivations the noise std is factored into $\eta$. But what I meant is that intuitively $\eta$ should depend, in the experiments, on the noise std. It seems to me that the noise std is the same on all experiments and that this matter is not discussed.
---
Reply to Comment 1.1.1:
Title: Discussion with Reviewer S3JW
Comment: Dear Reviewer S3JW,
(1) **DPS does not sample the posterior in noisy setting.**
Thanks for the comment, we now better understand your intent. We agree with your comment on DPS, and our result only holds in the linear noiseless case under restrictive assumptions. We will emphasize this in the paper. However, this stylized theory did produce one practically useful result: We wanted to solve linear inverse problems in the latent space to leverage the power of Stable diffusion and other pre-trained foundation models. A standard way of using DPS in the latent space failed, and this led us to design the gluing objective. Specifically, even in the linear noiseless case, (latent) DPS did not recover the ground-truth (due to lack of contraction to the unique fixed point), motivating the gluing objective to fix this issue.
As we responded to reviewer yVXi, we will remove the pixel-space analysis (move it to the appendix), shorten the latent-space analysis, and emphasize it only holds for the noiseless case. We will further highlight that the analysis goal is to provide intuition on the gluing objective.
(2) **Sampling from true posterior and metrics such as LPIPS and FID.**
This is a fair point. There is no theoretical justification to assert that DPS or PSLD sample from the true posterior. However, in our paper we study the solution of inverse problems where the ground truth is known. This helps in validating the reconstruction produced by our algorithm, and thus pairwise error-metrics like MSE/PSNR and LPIPS can be used. These metrics are used, e.g. for MRI quality assessment and numerous other inverse problems, as non-perfect but still reasonable metrics to compare inverse problem solvers. Finally, as you rightly point out, there is no theory showing that metrics like FID or Inception score say anything about the quality of posterior sampling and this remains as an important research direction.
(3) **Did the authors tune the parameter $\eta$? In the main paper there does not seem to be any discussion on this matter for your methods and I believe that this is quite important.** We tuned the hyper-parameter $\eta$ to the extent possible with our available computing resources. The reported results are with the best hyper-parameter we could find. We will revise the discussion to make this clear.
(4) **It seems to me that the noise standard deviation is the same on all experiments and this matter is not discussed.** As rightly pointed out by the reviewer, the noise standard deviation is the same on all experiments. This is the same setting as the baseline DPS. We will add this discussion in the revised version.
### Concluding Remark
We are happy to address any remaining points during the discussion phase. | Summary: While previous methods have focused on solving linear inverse problems based on diffusion models, this paper presents a first extension to latent diffusion models (LDMs). The core idea builds upon the existing DPS method, which forms an approximation to p(y|x_t) by using the denoising score estimate. The key challenges of the extension to LDMs are (1) the non-bijective mapping between encoder and decoder, and (2) the inconsistency at the boundary of the mask (for inpainting problems). To address these challenges, the authors propose an additional gluing term that penalizes the discontinuity at the mask boundary.
The main practical contribution of this paper is that it unlocks the potential of large pre-trained LDMs (e.g. Stable Diffusion), and hence the proposed method, built on more powerful pre-trained models, enjoys superior performance compared to its counterparts in the ambient/pixel space setting.
In addition, the paper also presents a theoretical analysis in a toy setting (linear two-step diffusion models) to motivate the proposed objective.
Strengths: - While the extension of DPS to LDMs seems natural, I appreciate the practical impact of the proposed methodology given the popularity and superior power of existing pre-trained LDMs.
- If I understand it correctly, doing conditional inference in the latent space would also be more efficient than doing inference in the ambient space (after controlling for model complexity, though in practice I wonder if that comparison can be made).
- The proposed method is also claimed to be robust to the choice of step-size, at least in toy settings, though I am not sure how this would extend to the general case.
Weaknesses: - The theoretical analysis seems to follow reference 30 closely and takes up a large portion of the paper. However, I am not sure how many insights can be carried over to the general case. In my opinion, the proposed method stays well-motivated without this analysis. Hence, I believe either discussing the extension to general cases or trimming this part would make a better paper.
- The evaluation part can be misleading given that pre-trained LDMs are "trained on much more data compared to the one used by DPS" (ref: appendix, caption above table 5). I think this information should be disclosed in the main paper instead of the appendix. In addition, it would be helpful to also disclose the unconditional generation performance of all backbone diffusion models, as well as the size of the models, the number of training data, etc.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Theorem 3.8: what is the value for gamma_i in Alg 2?
- Theorem 3.8 and its surrounding text: can you elaborate more on why it is robust to different values of the step size eta? In the equation between line 221 and line 222 on page 7, if I choose a different value of eta, the resulting z0 would apparently be different (as long as the grad norm after eta is non-zero). Am I missing anything here? In addition, in this equation, should there be a multiplicative factor \gamma before the last term, or is it set to 1 for the theorem and the statement to be correct?
- Page 7. line 228 "DPS algorithm suffers from the curse of ambient dimension". Can you elaborate on the issue? Is it mainly the efficiency issue or the approximation accuracy issue?
- Would be helpful to accompany the theoretical analysis with empirical verification on toy data.
- (If possible), would be helpful to provide results where the base diffusion models for DPS and PSLD have similar performance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer yVXi
Dear Reviewer yVXi,
Thank you for the review and for pointing out the fact that our study proposes the **first method** for inverse problems with latent diffusion models, and **unlocks the potential** of large **pre-trained LDMs** for sample recovery.
Below, we respond to your remaining comments and questions.
(1) **The methodology is well-motivated, but trimming the theoretical analysis would make a better paper.**
We will trim this section of the paper as per the reviewer's comment. Note that the main purpose of the theoretical analysis is to give **intuition** (as rightly pointed out by Reviewer vbWT) on why the gluing term is critical to the empirical success of PSLD. The two-step process in a linear model setting **serves this purpose without unnecessary mathematical complications**. To elaborate, (a) the vanilla extension of DPS fails due to the many-to-one mapping of the VAE encoder, (b) GML-DPS fails due to the infinitely many fixed points of the linear system, and (c) PSLD works due to its inherent tendency to find the stable fixed point. The theoretical analysis gave us the intuition for the gluing objective, which led to contraction of the distance from the optimal solution and thereby strong empirical performance.
(2) **"Stable Diffusion is trained on much more data compared to the DM used by DPS". Move this information from Table 5 in appendix to the main body of the paper.**
As suggested, we will move this statement to the main body of the paper. However, we would like to highlight that the focus of the paper is to **unlock** the potential of large pre-trained LDMs (e.g. Stable Diffusion), not to beat DPS.
(3) **Helpful to discuss unconditional generation performance of the backbone diffusion models, dataset, size etc.**
Since most of the large pre-trained LDMs (e.g. Stable Diffusion) are maintained by commercial service providers, their pre-trained weights, datasets, and training iterations keep being updated, which makes it hard to compare these generative models with the pixel-space diffusion model used by DPS. Nevertheless, we report the metrics on FFHQ from the original papers and the GitHub source code.
| Model | Dataset|Weights|Iterations |Image Size|
|:---------|:-------|:------|:----------|:---------|
|SD-v-1.5 |LAION-5B|4.00GB |840K | 512x512 |
|PSLD-LDM |FFHQ |2.40GB |635K | 256x256 |
|DPS-DM |FFHQ |358MB |1M | 256x256 |
The FID score of an earlier version of LDM is 4.98 [1]. We could not find the unconditional generation performance of DPS in terms of FID anywhere as the authors did not evaluate it for generative modeling. The goal of their paper was to study **posterior sampling** using a pre-trained diffusion model. Also, sampling from diffusion models takes much more time than sampling from other generative models, such as GANs, which could be another reason for lack of evaluation in generative modeling unless that is the main contribution.
(4) **Theorem 3.8: What is the value of $\gamma_i$ in Alg 2?**
We set $\gamma_i=1$ as it is immaterial in a linear model setting.
(5) **Intuition behind Theorem 3.8 and its surrounding text.**
The intuition is that, after every step of denoising and measurement update, we would like to solve an optimization problem to ensure that the decoded sample resides on the manifold of natural images. This is achieved by the gluing update. In practice, one step of the gluing update suffices instead of exactly solving the optimization problem at every step. However, in a perfectly recoverable linear model setting, the optimization problem can be solved exactly in one step. This makes the gluing term robust against the choice of step size $\eta$.
As we discuss in line 184, the multiplicative factor $\gamma$ is immaterial in a linear model setting. Hence, we set it to $\gamma=1$. Empirically, we observe that $\gamma=0.1$ and $\eta=1$ are robust across the downstream tasks given in Appendix B (please also see the newly added experiments in the attached PDF).
(6) **Elaborate on "DPS suffers from the curse of ambient dimension".**
DPS suffers from the curse of ambient dimension because its gradients are computed in the pixel space of dimension $d$. Latent-based methods such as PSLD instead compute gradients in the latent space of dimension $k < d$, so the computation is more efficient. Furthermore, applying the chain rule through the VAE and running diffusion in the latent space is less expensive than running diffusion directly in pixel space.
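To make the dimension argument above concrete, here is a toy sketch with a **hypothetical linear decoder** standing in for the VAE decoder; the names `A`, `D`, and all dimensions below are illustrative assumptions, not the paper's actual models. The measurement-consistency gradient is a $d$-dimensional object in pixel space but only a $k$-dimensional one in latent space.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, m = 64, 8, 16                # pixel dim, latent dim, measurement dim

A = rng.standard_normal((m, d))    # toy linear measurement operator
D = rng.standard_normal((d, k))    # toy linear "decoder" z -> x
y = rng.standard_normal(m)         # observed measurements

# Pixel-space (DPS-style) guidance: gradient of ||y - A x||^2 w.r.t. x
# lives in the ambient dimension d.
x = rng.standard_normal(d)
grad_x = -2 * A.T @ (y - A @ x)            # shape (d,)

# Latent-space (PSLD-style) guidance: the chain rule through the decoder
# gives a gradient in the latent dimension k.
z = rng.standard_normal(k)
grad_z = -2 * D.T @ A.T @ (y - A @ D @ z)  # shape (k,)

assert grad_x.shape == (d,) and grad_z.shape == (k,)
```

The latent gradient is exactly the pixel-space gradient evaluated at $x = Dz$ pulled back through the decoder's Jacobian, which is what makes the latent computation cheaper.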
(7) **(If possible) Compare PSLD and DPS with the same diffusion model.**
Since our goal is to solve inverse problems using LDMs, we use large pre-trained LDMs (e.g., Stable Diffusion) as our generative prior. In practice, training these LDMs requires substantial computational investment, and their performance is usually better than that of pixel-space diffusion models. Although we would be happy to compare performance with the same diffusion model, we are not aware of any pixel-space diffusion model that is comparable to latent-space diffusion models.
### Concluding Remark
Please let us know if the clarifications and additions suitably address your concerns. We are happy to address any remaining points during the discussion phase.
### Reference
[1] Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. "High-resolution image synthesis with latent diffusion models." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their thorough and thoughtful response. All my concerns have been touched and addressed. | Summary: This paper investigates the use of diffusion models for solving inverse problems. While many recent papers have explored diffusion models for inverse problems, to the best of my knowledge, this study is the first to propose a method for inverse problems with latent diffusion. One advantage of latent diffusion is its lower computational demand.
The main challenge in solving inverse problems with diffusion models lies in computing the intractable likelihood term $\nabla \log p_{x_t}( y | x_t )$, where $x_t$ represents a noisy point in pixel-space. Recently, the DPS algorithm was introduced to approximate this term as $\nabla \log p_{x_t}( y | \mathbb{E}[x_0| x_t] )$. The first contribution of this paper is to extend the DPS framework to a latent diffusion model, approximating the intractable likelihood $\nabla \log p_{x_t}( y | z_t )$, where $z_t$ denotes a noisy point in latent space, with $\nabla \log p_{z_t}( y | \mathbb{E}[z_0| z_t] )$ along with an additional term that enforces $\mathbb{E}[z_0| z_t]$ to be a fixed point of the autoencoder (GML-DPS). Furthermore, the latter term is modified to enforce consistency in the measurements (PSLD). The empirical performance of the PSLD algorithm is compared with the standard DPS in high-dimensional reconstruction tasks, and improvements over DPS are demonstrated.
These three algorithms (DPS, GML-DPS, PSLD) are theoretically studied in the case of a data distribution corresponding to a Gaussian supported on a low-dimensional subspace, perfect knowledge of the subspace, zero noise in the measurements, and a measurement matrix that is bijective over the subspace. Specifically, this implies that the inverse problem can be exactly solved given the measurements. Finally, the sampling processes are two-step diffusion processes. The authors demonstrate that all three algorithms precisely recover the target signal, and PSLD exhibits robustness to variations in the specification of the step sizes.
Strengths: Solving inverse problems with (large) latent diffusion models is a relatively underexplored area, and this paper fills a gap in the existing literature. In particular, the authors demonstrate how their methods can be used with SOTA latent-based foundation models such as stable diffusion. The empirical results presented are promising and demonstrate the potential of this approach.
Additionally, the authors offer some theoretical guarantees, albeit in a limited setting. These guarantees are valuable and noteworthy, considering the field's scarcity of rigorous results, even in simple toy examples.
Weaknesses:
The main weaknesses of the paper lie in the experimental evaluations and results. Specifically, all experiments are conducted on subsets of the FFHQ dataset and images sourced from the internet. The authors evaluate the PSLD algorithm on two latent models: LDM-VQ-4, trained on FFHQ, and Stable Diffusion, trained on a significantly larger dataset (LAION). However, the comparisons are made between PSLD on these two latent models (proposed) and DPS (baseline) on a standard diffusion model trained *exclusively* on FFHQ.
The authors demonstrate improved performance of PSLD on the (latent) Stable Diffusion model compared to DPS on the standard diffusion model. However, when employing the latent model LDM-VQ-4 trained solely on FFHQ, the improvements are marginal (Table 3).
As a result, it becomes challenging to ascertain whether the observed improvements stem from algorithmic enhancements or simply the utilization of superior models trained on larger datasets, which may also account for the enhanced out-of-distribution performance.
Furthermore, the experiments do not shed light on the improvement attributable to the addition of the "gluing" term in comparison to the DPS's vanilla extension term. One might hypothesize that the "gluing" term could be employed in DPS (by removing the encoder-decoder), potentially enhancing the algorithm's performance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Some typos and questions:
- In Section 2, the notation can be confusing. To clarify, it is suggested to use $x_0^*$ to denote the target signal and distinguish it from the first sample of the backward diffusion ($x_0$).
- For Equations (3), should it be $\hat{x}_0 = \mathbb{E}[x_0 | x_t]$?
- In Algorithm 1: line 1 what is $\mathcal{T}$?
- Equations (7) should include a $\log$ term for the DPS vanilla extension. Additionally, as shown in the supplementary material, $\mathcal{A}^\top$ should be added to (7) and line 8 of Algorithm 2.
- In Algorithm 2, it appears that the unknown target $x_0^*$ is needed, but in reality, the algorithm only requires the measurements $y$. It is advisable to modify the algorithm to reflect this (as in Algorithm 1).
- Line 142 of the Theoretical section introduces the noisy inverse problem as $y = A x_0 + \sigma_y n$. However, it should be noted that exact recovery is not possible in this case and it seems that the theoretical results are proven under the assumption of $\sigma_y = 0$, where the problem can be solved exactly. The authors should explicitly address these points to ensure clarity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer vbWT
Dear Reviewer vbWT,
Thank you for highlighting the fact that our study proposes the **first method** for inverse problems with latent diffusion models. We also thank you for your comment that the **theoretical results are valuable and noteworthy** and the **empirical results are promising**.
Below, we respond to your remaining comments and questions.
(1) **The main weakness is that all experiments are conducted on subsets of FFHQ and images from the internet.** To make a fair comparison with the baselines, we use the same experimental setting and evaluate on the same subsets of FFHQ as the baselines. In addition, we show improved performance on out-of-distribution images from the internet. We view this as a strength because we use the same generative model for both the datasets, whereas prior works (including DPS) require dataset specific generative models.
(2) **We only show marginal improvement of PSLD (LDM-VQ-4) over DPS in Table 3.**
Our goal is to develop a framework to leverage the power of pretrained latent models, such as Stable Diffusion. In this table, our focus is not to maximize the margin by which we beat the previous state-of-the-art DPS. The fact that we can still obtain results comparable to (or better than) the state-of-the-art DPS using an LDM (see Table 3) indicates that we do not lose much information by shifting the diffusion process to the lower-dimensional latent space. Latent diffusions are much faster, and our method also unlocks the potential of using pre-trained LDMs, as pointed out by Reviewer yVXi. Besides, Table 2 (along with the newly added results) shows that PSLD can be better than DPS by simply scaling these LDMs, which is a common practice adopted by commercial service providers.
(3) **Is our Improvement due to better algorithm or better generative models due to larger training dataset?**
We would like to clarify that the significant gain of PSLD over DPS (both in-distribution and out-of-distribution) is partly due to the fact that Stable Diffusion has been trained on a significantly larger dataset. However, it is important to note that none of the existing posterior sampling algorithms could leverage this pre-trained latent diffusion model for inverse problems. Another important aspect is the ability of PSLD to use the same generative model for several downstream tasks on FFHQ, ImageNet, and random images sourced from the web. On the contrary, prior works (including DPS) require dataset specific generative models, which limits their application to general domain images.
(4) **Is our improvement attributable to gluing term, compared to the DPS vanilla extension?**
As we discuss in Section 2.1, the vanilla extension of DPS fails due to many-to-one mapping of the encoder. Still, we believe this is not a fair comparison with DPS because the authors originally built DPS for a pixel-space diffusion model (not a latent one). Making DPS approximation work in the latent-space requires non-trivial extensions, which we discuss in Section 2.1, Section 3.2 (**Theorem 3.4**), and Section 3.3 (**Theorem 3.7 and 3.8**). We also provide experimental results in Appendix B.2 (Table 5) and newly added results on super-resolution and Gaussian deblur tasks (please see the attached pdf).
(5) **One may hypothesize that including gluing term in DPS may enhance its performance.**
Naively gluing in pixel-space will create visible edges separating inpainted pixels from the observed ones. Still, the gluing term in pixel-space DPS will provide guidance through an extra gradient term $\nabla_{\mathbf{x}_i}||A^T\mathbf{y} - \hat{\mathbf{x}}_0||_2^2$. This is an interesting direction for future research, but not within the scope of our paper since our goal is to solve inverse problems using latent diffusion models.
(6) **Typos and questions.** We will correct all the typos in the revised version, thank you for your careful reading.
For equation (3), it should be $\hat{x}_0 = \mathbb{E}[x_0|x_t]$. In Algorithm 1: $\mathcal{T}$ in line 1 should be the standard normal distribution $\mathcal{N}$. Since our algorithm only requires $\mathbf{y}$, we will modify the **Input** in Algorithm 2 similar to DPS as suggested by the reviewer. **(Line 142)** As correctly pointed out by the reviewer, exact recovery is possible when $\sigma_y=0$ (the setting we consider), but not when $\sigma_y>0$. We will clarify this in the revision.
### Concluding Remark
Please let us know if the clarifications and additions suitably address your concerns. We are happy to address any remaining points during the discussion phase.
---
Rebuttal Comment 1.1:
Comment: I appreciate the thorough response provided by the authors. I believe that this paper constitutes a valuable contribution and increased my score.
However, I would like to discuss the phrasing and structure of the experiments in Section 5. I find it slightly misleading, which not only led me astray but possibly other reviewers as well. I understand the enthusiasm surrounding the outcomes achieved with the stable diffusion model. However, I believe it would be more equitable for the readers if the authors first presented and discussed the results for PSLD on the latent diffusion model LDM-VQ-4. Comparing these outcomes with those of DPS for a standard diffusion model makes sense, as these two models are trained using the same volume of data. This approach would effectively establish the authors' assertion that the primary innovation lies in the proposed algorithm for latent diffusion, which can perform comparably well, if not slightly better, than the state-of-the-art DPS for standard diffusion models. Following these initial experiments, presenting the results for the Stable diffusion model would be logical and would further reinforce the authors' argument.
Finally, I am assuming that the authors have tested equation (5), subsequently moving on to (6), before eventually arriving at the proposed (7). I am curious whether the authors have conducted experiments to demonstrate that using (6) indeed outperforms the basic (5), and also to showcase that adopting (7) results in improvements over (6). It is possible that this information is present in the appendices, but I may have overlooked it.
---
Reply to Comment 1.1.1:
Title: Discussion with Reviewer vbWT
Comment: Dear Reviewer vbWT,
Thank you for reading our response and increasing the score. We greatly appreciate your timely feedback and active participation during the discussion phase. Below, we respond to your new comments.
(1) **Discussion on the phrasing and structure of the experiments in Section 5.** We thank the reviewer for the suggestion on how to make the flow of the experimental Section 5 more logical and further reinforce our arguments. As suggested by the reviewer, we will rephrase and restructure Section 5 in the revised version of the paper.
(2) **Have the authors tested equations (5), (6) and (7)?** We have tested equations (5), (6) and (7) leading up to the main idea of PSLD. In our inpainting experiments, we found that equation (5) failed to generate coherent images at the boundary, equation (6) made the boundary smooth at the cost of extensive parameter tuning, and finally equation (7) mitigated these issues, resulting in high PSNR and SSIM. Quantitatively, we have already demonstrated the improvement of equation (7) over equation (6) in Appendix B.2 (Table 5). The newly added experiments in the attached PDF (Table 1) also show this improvement of equation (7) over equation (6). In the revised version, we will add experiments with equation (5) as another baseline.
### Concluding Remark
We are happy to address any remaining points during the discussion phase. | Rebuttal 1:
Rebuttal: ### Response to all reviewers
Dear Reviewers,
We thank you for carefully reading our paper and providing us with valuable feedback. Below, we summarize the reviews and newly added experiments to substantiate our contributions.
(1) We are encouraged by the **unanimous comment** by all reviewers that our study proposes the **first method for solving inverse problems with LDMs**, which **unlocks** the potential of large pre-trained LDMs, such as Stable Diffusion.
(2) We thank Reviewer vbWT for highlighting the fact that the **theoretical result is valuable and noteworthy**, considering the field's scarcity of rigorous results, even in toy examples. We also thank all the reviewers for pointing out the fact that our **experimental results are promising**.
(3) Regarding constructive feedback, we have **favorably addressed all the questions** raised by the reviewers. In particular, we thank Reviewer vbWT, yVXi and xKok for suggesting relevant experiments with quantifiable metrics, which helped **strengthen our contributions**. We have added the following experimental results in the attached PDF:
(a) Quantitative results on Super-resolution (4X): Table 1
(b) Quantitative results on Gaussian Deblur: Table 1
(c) Qualitative results on colorization: Figure 1
(d) Quantitative comparison of runtimes of different algorithms: Table 2
(e) Quantitative comparison of NFEs of different algorithms: Table 2
(f) Quantitative comparison of unconditional generation performance: Table 2
(g) Overall pipeline of our proposed framework for arbitrary mask: Figure 2
(4) We thank Reviewer xKok for bringing related works to our attention. We have compared with these methods and cited accordingly in the revised version.
### Concluding Remark
Please let us know if the clarifications and additions suitably address your concerns. We are happy to address any remaining points during the discussion phase.
### Reference
[1] Song, Jiaming, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In: International Conference on Learning Representations. 2023. url: https://openreview.net/forum?id=9_gsMA8MRKQ.
[2] Yaniv Romano, Michael Elad, and Peyman Milanfar. The little engine that could: Regularization by denoising (RED). arXiv preprint arXiv:1611.02862, November 2016.
[3] Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D. and Norouzi, M., 2022, July. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings (pp. 1-10).
[4] Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems stochastically. arXiv preprint arXiv:2105.14951, May 2021.
[5] Yinhuai Wang, Jiwen Yu, and Jian Zhang. Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. In: The Eleventh International Conference on Learning Representations. 2023. url: https://openreview.net/forum?id=mRieQgMtNTQ.
[6] Meng, X. and Kabashima, Y., 2022. Diffusion model based posterior sampling for noisy linear inverse problems. arXiv preprint arXiv:2211.12343.
Pdf: /pdf/dde9427cd105dc9740108c11746f1756c6c3de5b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise | Accept (poster) | Summary: The paper presents a general method for learning to de-corrupt datapoints in order to perform generative modelling. The paper proposes a sampling method that applies iterative updates to the current state that significantly outperforms a naive sampling algorithm which reconstructs and denoises alternatively. The authors present a wide variety of corruption processes and outperform a single step model and the naive algorithm.
Strengths: Extending the idea of creating a generative model from corrupting data and learning to reconstruct it to a more general framework is a problem I believe many in this field are interested in. Not least for me personally, I have been interested in this problem and their sampling algorithm would have been useful for me to know when investigating this topic. This paper takes a very general approach making very few assumptions on the corruption process and makes a non-trivial contribution by coming up with a sampling algorithm that can perform iterative updates to generate new data which is found to be crucial for this type of method to work well.
The paper and methodology have flaws as I discuss next but I do believe that researchers will build on this work and find this contribution useful. The line of work is more engineering focussed and less based on theory which is not necessarily wrong and I think the novelty and likely interest in the topic slightly outweighs the negatives in this case.
Weaknesses: Starting out, the performance of the method is just not that good in terms of sample quality. I appreciate that getting methods to work well takes development over the course of multiple papers but I worry that this is a limitation of the method itself due to most corruptions tested not working well compared to standard diffusion models.
When using this framework as a generative model, there appear to be issues when the deterministic corruption transformation is not one-to-one because the corrupted space is much smaller, e.g. the space of constant-colour images or completely blurred images. The authors report problems with reduced diversity because of this and need to add a little noise to increase diversity. I think this is quite a major flaw in the design: unlike standard diffusion models, which can rely on maximum-likelihood arguments to ensure coverage of the data distribution, there are no such guarantees here, and it is not clear how much the noise trick alleviates the issue. This flaw seems fundamental to some corruption processes, and a proper investigation into the diversity of samples would be good. It is a little strange how, in effect, all of the data distribution is squeezed into this small subspace and then perturbing slightly around the subspace induces diversity; this seems quite ill-conditioned in that small changes in input make large changes in generated output.
The claims about new questions being raised as to the necessity of noise in generation should be reined in, given the popularity and performance of flow-matching-type methods that are deterministic during training (given a randomly sampled pair) and sampling. The links to these methods should also be discussed, especially in the case of corruptions such as animorphosis, which is quite similar to a flow between two arbitrary data distributions built as an interpolating bridge between two random samples from them.
The sections on deblurring and inpainting should be better introduced as the main paper talks about generative models and so this section comes as a bit of a surprise when I would have expected pure unconditional generation as the initial experiment.
Typos:
equation after line 145, need .e on final line
proof A.9, need i = t-1 on third line of E_t^2. How do you move the sum out of the norm on the final line ?
Edit after rebuttal: I have read the author's rebuttal and as I mention in my reply, my points regarding the ill-conditioned nature of the generative process and poor performance are still concerning for me and I intend to leave my score as it is.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The inpainting results seem so much better than other results in terms of sample quality. When you say Figure 4/Figure 10 is showing 'test images' do you actually mean that these were held out during training or has the network seen those during training. This should be made very clear since it seems that the network has just memorized those images (at least on celebA).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Discussed in weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > the performance of the method is just not that good in terms of sample quality.
We acknowledge that the quality of images generated in Section 5 is not comparable to that of Gaussian noise as a degradation. This work is not meant to be a SOTA-method paper; instead, we challenge both the theoretical frameworks in which noise is a critical component for learning the training distribution and the engineering status quo, which has not explored degradations beyond Gaussian noise. We believe our experiments effectively demonstrate that entirely noise-free schemes (blur, inpainting, super-resolution) can still work.
> The sections on deblurring and inpainting should be better introduced as the main paper talks about generative models and so this section comes as a bit of a surprise when I would have expected pure unconditional generation as the initial experiment.
We thank the reviewer for their valuable suggestion. We introduced conditional generation in Section 4 before unconditional generation in Section 5 because conditional generation was the first step as it occurred in our own project: if a method fails at conditional generation, it will also fail at unconditional generation, which is a more difficult problem.
> Typos: equation after line 145, need .e on final line proof A.9, need i = t-1 on third line of E_t^2. How do you move the sum out of the norm on the final line ?
We thank the reviewer for pointing us to these typos which we have now fixed in the current revision.
In the toy problem discussed in Appendix A.9, we consider a blur operator that removes one frequency from the image every time $t$ increases by one. Hence, given a random sample $X$, the vectors $x_j$ and $x_k$ for $j \neq k$ in the expansion $X = \sum_{i=0}^{T} x_i$ are orthogonal to each other. Thus we have
$E_t^2 = \left\|\sum_{i=t-1}^{T} (x_i - \hat x_i)\right\|^2 = \sum_{i=t-1}^{T} \|x_i - \hat x_i\|^2 + \mathop{\sum\sum}_{j \neq k} (x_j - \hat x_j)^T (x_k - \hat x_k).$
Since both $X$ and $\hat X$ are expanded in the same orthogonal basis, i.e., the Fourier directions, each of the terms $x_j^T x_k$, $x_j^T \hat x_k$, $\hat x_j^T x_k$, and $\hat x_j^T \hat x_k$ is zero for $j \neq k$, so the cross terms vanish. This results in
$E_t^2 = \sum_{i=t-1}^{T} \|x_i - \hat x_i\|^2.$
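As a quick numerical sanity check of the orthogonality argument, the identity can be verified with any orthonormal basis standing in for the Fourier directions; the basis `Q` and the signals below are random illustrative stand-ins, not the paper's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Any orthonormal basis suffices for the argument (the paper's setting
# uses the Fourier directions); take an arbitrary one via QR.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

X = rng.standard_normal(n)
Xhat = rng.standard_normal(n)   # stand-in for the reconstruction \hat X

# Components x_i = (q_i^T X) q_i, so X = sum_i x_i and x_j ⟂ x_k for j != k.
x = (Q.T @ X)[:, None] * Q.T      # row i is x_i
xh = (Q.T @ Xhat)[:, None] * Q.T  # row i is \hat x_i

t = 3
diff = x[t:] - xh[t:]             # the components entering E_t
lhs = np.linalg.norm(diff.sum(axis=0)) ** 2        # ||sum_i (x_i - xh_i)||^2
rhs = (np.linalg.norm(diff, axis=1) ** 2).sum()    # sum_i ||x_i - xh_i||^2
assert np.isclose(lhs, rhs)       # cross terms vanish by orthogonality
```

Because the components are orthogonal, the squared norm of the sum equals the sum of squared norms, which is exactly the cancellation of cross terms used in the derivation.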
> When you say Figure 4/Figure 10 is showing 'test images' do you actually mean that these were held out during training or has the network seen those during training.
In all of our experiments, we evaluate our methods on the held-out testing dataset. We specify the evaluation details in lines 169-173.
Thank you again for your thoughtful review. We made an effort to address your feedback including paper edits and would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I appreciate your answers to the questions. My points regarding the ill-conditioned nature of the generative process and poor performance are still concerning for me and I intend to leave my score as it is. | Summary: This paper extends the Gaussian diffusion model toward arbitrary image-to-image translations, named Cold Diffusion. Specifically, the authors define a generalized forward diffusion process and its training process, then propose a novel Transformation Agnostic Cold Sampling (TACoS) process for generations. Experiments show that Cold Diffusion can effectively achieve image generation by learning image-to-image translation.
Strengths: This paper proposed a novel idea that the diffusion process can be applied to arbitrary image-to-image translations, not limited to Gaussian noise. The ideas proposed in this paper have achieved a certain impact in the field of diffusion models and inspired a lot of work. In light of this, I recommend that the paper be accepted.
Weaknesses: The author only provides an empirical formulation, without rigorous theoretical analysis. Nonetheless, this cannot overshadow the novelty of this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to apply the diffusion process to any domain translation, i.e., dimension agnostic and modality agnostic?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: 1. It seems that the image-to-image translation needs to have invariant dimensions.
2. It will be interesting to have a mathematical analysis of Cold Diffusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your careful review and positive assessment of our work. Your recognition of the novel approach and potential impact in the field of diffusion models is highly encouraging. Below, we address the points you raised:
> Is it possible to apply the diffusion process to any domain translation, i.e., dimension agnostic and modality agnostic?
Yes, we believe our proposed algorithm is modality agnostic and can be adapted to other modalities such as speech and text, though diffusion modeling in general is difficult to apply in variable-dimension settings. For example, text can occur in varying lengths, and corruption processes that change the length of the text present difficulties for diffusion models. Solving this problem with Cold Diffusion may or may not be harder than in the Gaussian case.
---
Rebuttal 2:
Comment: Thanks for your response. I will keep my rating. | Summary: This work introduces a novel approach called cold diffusion, in which both the forward and backward processes are deterministic. The authors propose a scheme called Tacos, which predicts x_{s-1} from x_s by leveraging the estimated increment D(\hat{x}_0, s) - D(\hat{x}_0, s-1).
Strengths: The authors proposed a nontrivial generalization of the diffusion generative model to a simple and straightforward deterministic process.
Weaknesses: 1. The justification of TACoS (Section 3.3) appears weak.
- Higher-order terms may have a significant impact that is not adequately addressed.
- TACoS is likely to fail in standard Gaussian diffusion scenarios.
2. As mentioned in Section 5.2, the generated samples exhibit low diversity.
This indicates a failure to accurately recover the sample distribution, which is a primary objective of diffusion generative models.
3. Beyond the issue of diversity, the quality of the generated samples is also questionable.
In the appendix, the 128x128 generated samples demonstrate significantly poorer quality compared to regular generative models.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The output quality of other cold generations is questionable. For instance, in the case of super-resolution, starting from a 2x2 image for the backward process may lead to similar issues of low diversity in the generated samples.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: This reviewer appreciates the authors' introduction of a novel deterministic generative diffusion framework. However, it is important for the proposed framework to demonstrate comparability to reasonable GAN models in terms of diversity and output quality. The inclusion of Gaussian noise (as seen in predictor-corrector models or Langevin dynamics) or a similar randomization process is typically considered necessary and serves as a key component in diffusion processes. To justify the diffusion without noise, it is crucial for the authors to provide results that are at least comparable to decent GAN models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for recognizing the nontrivial generalization of our work. We address each of your points below:
> Higher-order terms may have a significant impact that is not adequately addressed.
In lines 156-160, we mention that the analysis in Section 3.3 is not a complete convergence theory but rather highlights a desirable theoretical property of TACoS that a naive sampler lacks. In the analysis presented in lines 138-148, we assume $t$ to be approximately 0. This implies that the difference between $D(x, t=0)$ and $D(x, t=\epsilon)$ is small enough that higher-order terms can be neglected relative to the first-order terms. \
This analysis is limited to small $t$ and raises the question of whether this advantage of Algorithm 2 (TACoS) over the naive Algorithm 1 extends to the regime where $t$ is not close to 0. To this end, we prove in Appendix A.9 that for a toy problem in which the blur operator removes one frequency at a time, the error incurred by Algorithm 1 is higher than that of Algorithm 2 for all $t$.
> TACoS is likely to fail in standard Gaussian diffusion scenarios.
We appreciate your concern, but as we discuss in detail in Section 5.1 and Appendix A.6, DDIM is a special case of the proposed TACoS algorithm when applied to Gaussian noise. This means that both Algorithm 1 and Algorithm 2 are equivalent to DDIM in the case of Gaussian diffusion. We explicitly verify this in Table 4, where we train a hot diffusion model, i.e. we use Gaussian noise as degradation. As standard diffusion is recovered as a special case, this indeed works well and TACoS does not fail for standard Gaussian diffusion scenarios.
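The claimed equivalence can be sanity-checked numerically: with the Gaussian degradation $D(x_0, t) = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\hat\epsilon$ built from the estimated noise, one TACoS step reproduces the deterministic DDIM update. A minimal sketch, assuming an oracle noise estimate and an arbitrary illustrative $\bar\alpha_t$ schedule:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, eps = rng.normal(size=4), rng.normal(size=4)
abar = np.linspace(0.99, 0.01, 11)  # illustrative decreasing alpha-bar schedule

t = 10
x_t = np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * eps  # VP forward process
eps_hat = eps                                             # oracle noise estimate
x0_hat = (x_t - np.sqrt(1 - abar[t]) * eps_hat) / np.sqrt(abar[t])

def D(x0h, s):
    # Gaussian degradation built from the *estimated* noise
    return np.sqrt(abar[s]) * x0h + np.sqrt(1 - abar[s]) * eps_hat

x_tacos = x_t - D(x0_hat, t) + D(x0_hat, t - 1)  # one Algorithm 2 (TACoS) step
x_ddim = np.sqrt(abar[t - 1]) * x0_hat + np.sqrt(1 - abar[t - 1]) * eps_hat
```

Since $D(\hat{x}_0, t) = x_t$ by construction, the TACoS step collapses to $D(\hat{x}_0, t-1)$, which is exactly the deterministic (eta = 0) DDIM update.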
> The output quality of other cold generations is questionable. For instance, in the case of super-resolution, starting from a 2x2 image for the backward process may lead to similar issues of low diversity in the generated samples.
In our experiments, we observed that the *starting point*, whether a 2x2 image is used for down-sampling based degradation in Table 3 or a 1x1 image in the case of cold diffusion in Table 5, has a distinct impact on the final results because of the nature of degradation involved. Your observation about the potential issues of low diversity in the generated samples starting from a 2x2 image is insightful. However, we found that the underlying cause of sub-par performance is related to the number of steps, not necessarily the diversity in the 2x2 region.
Specifically:
* In the case of blur-based cold generations (Table 4), perfect symmetry at the starting point leads to FID scores similar to those obtained when starting from a 2x2 image using downsampling as degradation (Table 3).
* The key difference between these results lies in the sampling steps. For cold generation, we utilized 300 steps to generate 128x128 CelebA images, while in the case of downsampling-based degradation, we use just 6 steps.
* Consequently, we believe that the limitation in performance is attributable to the fewer number of steps rather than less diversity in the 2x2 region.
* Moreover, for downsampling-based degradation, our results demonstrate that starting from 2x2 images produces more diverse samples compared to 1x1 images for the CelebA dataset.
> This reviewer appreciates the authors' introduction of a novel deterministic generative diffusion framework. However, it is important for the proposed framework to demonstrate comparability to reasonable GAN models in terms of diversity and output quality. The inclusion of Gaussian noise (as seen in predictor-corrector models or Langevin dynamics) or a similar randomization process is typically considered necessary and serves as a key component in diffusion processes. To justify the diffusion without noise, it is crucial for the authors to provide results that are at least comparable to decent GAN models.
We thank the reviewer for their appreciation of our work on introducing a novel deterministic generative diffusion framework. In Table 4, where we compare cold diffusion to "hot diffusion" (which uses Gaussian noise as degradation), the use of estimated noise in TACoS sampling makes it equivalent to DDIM [1], a deterministic sampling method, which is in fact better than sampling methods that use noise like DDPM [2]. Hence on our end, we compare our cold diffusion with a standard and widely accepted generative model. \
Moreover, this is not meant to be a SOTA method paper (we have updated our local draft to clarify). Instead, this work challenges both theory frameworks where noise is a **critical** component for learning the training distribution and the engineering status-quo that has explored little beyond Gaussian noise. We believe our experiments effectively demonstrate that entirely noise-free schemes (blur, inpainting, super-resolution) can still work. This will not only be of theoretical interest, but it may move the community to explore other kinds of diffusion for common tasks like upsampling, and with further research investments such methods may someday become standard tools.
[1] Denoising Diffusion Implicit Models \
[2] Denoising Diffusion Probabilistic Models
Thank you again for your thoughtful review. We made an effort to address your feedback including paper edits and would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Title: Clarification on my previous comment regarding Gaussian case
Comment: I continue to harbor doubts regarding its applicability to the Gaussian scenario, especially in the Variance Exploding setup defined by $x_t = x + W_t$ where $W_t$ is a standard Brownian motion characterized by $W_t\sim \mathcal{N}(0, t)$.
For this case, Algorithm 2 fundamentally operates by subtracting noise: $x_{s-1} = x_s - Z$ where $Z = N_s-N_{s-1}$ is essentially a Gaussian noise. Given this setup, I'm skeptical about the algorithm's ability to produce accurate samples.
---
Reply to Comment 1.1.1:
Title: Clarification on Variance Exploding setup for Gaussian Noise
Comment: Thank you for engaging with us in the discussion. We would like to clarify that Algorithm 2 works even in the Variance Exploding (VE) setup, as we show in this discussion.
For the case of VE setup we have
$x_t = x_{t-1} + \sqrt{\sigma_t^2 - \sigma_{t-1}^2}\epsilon$
where $\epsilon$ is sampled from $\mathcal{N}(0, I)$ and $\sigma_t$ increases with time $t$. We adopt this VE nomenclature from Equation 20 (Appendix B) of SBM [1], which shows that it results in the VE SDE.
This VE in discrete form can be further simplified as
$x_t = x_0 + \sigma_t \epsilon$
Hence, the degradation model $D(x_0, t)$ gives $x_t = x_0 + \sigma_t \epsilon$ and the reconstruction operation $R(x_t, t)$ predicts the clean image $\hat x_0$. Thus Algorithm 2 gives us
$x_{t-1} = x_t - D(\hat x_0, t) + D(\hat x_0, t-1)$
As discussed in Section 5.1, if one uses the *estimated noise* in the degradation, which in this case is $\hat \epsilon = \frac{x_t - \hat x_0}{\sigma_t}$, we have
$D(\hat x_0, t) = \hat x_0 + \sigma_t \hat \epsilon$ \
$D(\hat x_0, t) = \hat x_0 + \sigma_t \frac{x_t - \hat x_0}{\sigma_t}$ \
$D(\hat x_0, t) = \hat x_0 + x_t - \hat x_0$ \
$D(\hat x_0, t) = x_t$
Thus the sampling in the proposed Algorithm 2 simplifies to \
$x_{t-1} = x_t - x_t + D(\hat x_0, t-1)$ \
$x_{t-1} = D(\hat x_0, t-1)$
This $D(\hat x_0, t-1)$ can be simplified as \
$D(\hat x_0, t-1)= \hat x_0 + \sigma_{t-1} \hat \epsilon$
Hence we get \
$x_{t-1} = \hat x_0 + \sigma_{t-1} \hat \epsilon$ from our proposed Algorithm 2, which is in fact the deterministic sampling algorithm for the VE setup.
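The simplification above can also be verified numerically. A small sketch with an oracle reconstruction $\hat{x}_0 = x_0$ (which makes $\hat\epsilon = \epsilon$ exactly) and an arbitrary increasing $\sigma_t$ schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)
eps = rng.normal(size=4)
sigma = np.linspace(0.01, 1.0, 11)  # illustrative increasing sigma_t schedule

t = 10
x_t = x0 + sigma[t] * eps            # VE forward: x_t = x_0 + sigma_t * eps
x0_hat = x0                          # oracle reconstruction R(x_t, t)
eps_hat = (x_t - x0_hat) / sigma[t]  # estimated noise (equals eps here)

def D(x0h, s):
    # degradation with the *estimated* noise, as in Section 5.1
    return x0h + sigma[s] * eps_hat

x_prev = x_t - D(x0_hat, t) + D(x0_hat, t - 1)  # one Algorithm 2 (TACoS) step
```

As derived above, the step reduces to $x_{t-1} = \hat x_0 + \sigma_{t-1}\hat\epsilon$, i.e. the deterministic VE update.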
Thank you again for your thoughtful response. We made an effort to address your feedback including paper edits and would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address? | Summary: This paper introduces a method for image generation based on generic degradation and reconstruction operators. The approach generalizes diffusion models, which correspond to degradation by additive Gaussian noise, and reconstruction by denoising. In TACoS, the sampling scheme is agnostic to the choice of image degradation, and the corresponding restoration operator is learned via least squares regression over the data.
The authors additionally introduce a sampling iteration with a correction term (Algorithm 2) that induces first order cancellation of errors induced by improper learning of the reconstruction operator. The correction term is shown to greatly improve performance over a naive approach, since it prevents blow-up of fitting error over multiple iterations. The authors also prove that in a toy problem (degradation via frequency filtering) that Algorithm 2 has smaller reconstruction error than the naive approach.
Finally, the authors demonstrate that TACoS can be used for sampling and reconstruction with a variety of degradation operators, such as deblurring, inpainting, and superresolution. First, they show that in these cases, solving the reconstruction problem associated with each degradation is feasible for large-scale image datasets such as CIFAR-10 and CelebA. Then, they show that the blur transformation can be used to sample CelebA and AFHQ images, albeit with significantly reduced sample quality and diversity. They finally show as a proof of concept that other transformations such as inpainting, super-resolution, and animorphosis, can also be used to generate samples.
Strengths: - Clarity: the paper is well written and very clear. To the best of my knowledge the derivations are correct.
- Novelty: the idea behind this paper is interesting and novel to the best of my knowledge. However, as I will discuss below, the practical value of this idea is unclear.
- Low computational cost: the proposed method is simple and computationally cheap, given that it only requires fitting one regression model over the dataset.
- Experimental methodology: the experiments in this paper clearly demonstrate that TACoS can be used for sample generation under a variety of datasets and image transforms. It is interesting that the method can be used to generate samples with arbitrary degradation and reconstruction operators, as opposed to Gaussian noising and denoising via score-matching.
Weaknesses: - Unclear practical value: in my opinion, the practical value of this paper is unclear because the proposed algorithm appears to have low sample quality and diversity. There are many existing methods for deterministic iterative sampling algorithms, notably flow based methods like Continuous Normalizing Flows [1] and Probability Flow ODE [2], which can both eliminate the need for sampling noise and attain high sample quality.
- Unclear applications: it is interesting that the proposed method can use arbitrary image transformations, which may pave the way for other applications beyond sample generation. However, it is currently unclear what these applications may be, which further limits the value of this approach.
- Lack of baselines: the sample generation experiments in Table 4 should also include baseline values, representing the FID that can be attained by existing methods such as vanilla DDPM or GAN based methods.
[1] Building Normalizing Flows with Stochastic Interpolants (Albergo and Vanden-Eijnden, 2023)
[2] Score-Based Generative Modeling through Stochastic Differential Equations (Song et al., 2021)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are some potential applications (beyond sampling) for TACoS with non-standard degradation operators like animorphosis?
Note: it's a bit confusing to report RMSE in Tables 1-3 but to then discuss the PSNR in the text. It would be helpful to stick to one metric throughout the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations and potential negative societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address each of your points below.
> Unclear practical value and unclear applications
We agree with the reviewer that cold diffusion does not outcompete the much more highly engineered and compute-intensive state-of-the-art Gaussian diffusion models. We also agree that multiple works like [1], [2], [3], [4] remove the need to use random Gaussian noise during sampling (i.e. at inference time). Moreover, works like [5] and [6] show that degradations other than Gaussian noise are useful as diffusion processes.
This work is not meant to be a SOTA method paper, and we have updated our working draft to clarify this. Instead, this work challenges both theoretical frameworks in which noise is a **critical** component for learning the training distribution and the engineering status quo that has explored little beyond Gaussian noise and other stochastic degradations. We believe our experiments effectively demonstrate the surprising fact that entirely noise-free schemes (blur, inpainting, super-resolution) can still work.
This observation is not only of theoretical interest, but it may move the community to explore other kinds of diffusion for common tasks like upsampling, and with further research investments such methods may someday become standard tools. Furthermore, by exploring deterministic degradations beyond Gaussian noise, we open up the possibility of finding the best degradation that will work most effectively for image generation. This represents a significant departure from traditional diffusion models and offers a new pathway for research and experimentation in the field. By expanding the horizons of what's considered in terms of degradation, we may unlock new avenues and insights that can contribute to advancing the state of generative models.
[1] Building Normalizing Flows with Stochastic Interpolants \
[2] Score-Based Generative Modeling through Stochastic Differential Equations \
[3] Denoising Diffusion Implicit Models \
[4] Elucidating the Design Space of Diffusion-Based Generative Models \
[5] Structured Denoising Diffusion Models in Discrete State-Spaces \
[6] DiffusionDet: Diffusion Model for Object Detection
> Lack of baselines: the sample generation experiments in Table 4 should also include baseline values, representing the FID that can be attained by existing methods such as vanilla DDPM or GAN based methods.
In Table 4 of our paper, we compare our "cold diffusion," which uses a deterministic blur degradation, to a noise-based degradation, which we call "hot diffusion"; the latter really is a vanilla diffusion model. The noise-based degradation uses Gaussian noise and the sampling provided in Algorithm 2. Though it may appear different from existing methods like DDPM or DDIM, as discussed in Section 5.1, the sampling method underlying the proposed TACoS is equivalent to DDIM. We present the fact that DDIM is a special case of our Algorithm 2 for the case of Gaussian noise in Appendix A.6. Our revised manuscript now clarifies this.
>What are some potential applications (beyond sampling) for TACoS with non-standard degradation operators like animorphosis?
We present results for various non-standard degradations like animorphosis or snow to show that our proposed sampling algorithm is agnostic to any degradation and is not designed for one specific degradation. Another use of animorphosis can be in building flows between any two arbitrary distributions.
>Note: it's a bit confusing to report RMSE in Tables 1-3 but to then discuss the PSNR in the text. It would be helpful stick to one throughout the paper.
We thank the reviewer for bringing this confusion to our notice. We have now revised our text and use RMSE everywhere, aligning the metrics throughout the paper for consistency. We'll include these edits in our camera ready version.
Thank you again for your thoughtful review. We made an effort to address your feedback including paper edits and would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you answering my questions and for clarifying the equivalence between hot diffusion and DDIM. I appreciate the authors' efforts to discuss the merits of this paper with me. I feel that my understanding of the contributions and applications has not changed, so I do not plan to edit my score. | Rebuttal 1:
Rebuttal: We thank all of our reviewers for their thoughtful comments. Based on all the suggestions, we have updated our draft and we would like to highlight a few central contributions of our work.
1. In this paper, we aim to challenge the common belief that Gaussian noise in any form, either during training or sampling, is **necessary** for diffusion models to work. We believe this will not only be of theoretical interest, but it will also move the community to investigate other components that could improve diffusion models, such as the model architecture or training setup.
2. We demonstrate the above point by using different types of **deterministic** degradations. We are enthusiastic readers of other papers that use alternative noise-based degradations like gamma or salt-and-pepper noise, as mentioned, but whether alternative noise-based degradations are also possible is not a question we are investigating in this work.
3. We noticed that there were a few questions regarding baselines. We want to highlight that the presented hot diffusion is in fact DDIM. We present the equivalence of our proposed TACoS algorithm to DDIM in Section 5.1 and Appendix A.6. We have clarified this in our revision.
Overall, we believe our finding that cold diffusion can perform high-quality generation is surprising and provides a key insight in the burgeoning study of diffusion models as a whole. Hence, we believe it would be valuable to the community for this work to appear at NeurIPS.
DreamWaltz: Make a Scene with Complex 3D Animatable Avatars | Accept (poster) | Summary: The paper presents an approach for creating animatable avatars from text prompts. It builds on DreamFusion and makes it articulated by incorporating articulated NeRF and SMPL body model. It also replaces the vanilla text-to-image model (StableDiffusion) with ControlNet to introduce 3D consistent SDS loss. The performance is evaluated using a user study where the proposed method is shown to outperform existing methods.
Strengths: - The paper addresses the challenging problem of creating neural avatars from text prompts.
- The paper demonstrates that DreamFusion can be extended to articulated humans.
- The proposed method is intuitive and makes sense, though it is not entirely novel.
Weaknesses: ### Novelty
- The main limitation of the paper is the lack of novelty. The paper builds on DreamFusion and then adopts existing methods for articulated NeRF. It replaces the vanilla stableDiffussion model with ControlNet for 3D consistent SDS loss. Overall, the proposed approach is a straightforward combination of existing methods and it is hard for me to identify any novel technical contribution of the paper.
### Baselines
- A simple baseline is missing. Similar to AvatarClip, we can extract a static mesh using MarchingCube and then rig it with SMPL. This will make it animatable. Since the paper's main contribution is animatable avatars, I believe having this baseline is important to properly validate the contributions of the paper.
### Visual quality
- The quality of the generated avatars is also limited even though they are all well-known characters.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### Implementation Details:
- Would be nice to provide further implementation details. What is the batch size, learning rate, guidance value, etc.?
- How is the background learned during training, what are light settings, etc?
### Ablation Studies:
- What is the impact of the density-weighted network? Existing methods for neural avatars (AniSDF, HumanNeRF, etc.) use different variants to accommodate non-rigid deformations. How does the performance compare with those?
- What happens if random body poses from humans prior are not used during training? Would the DWN module still work? Since it operates in the canonical space, why random body poses are required in the first place?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not discuss the limitations of the proposed approach. Overall, the visual quality of the generated avatars is quite limited and there are severe artifacts in the animations (e.g., on the arms of Woody in support). The time required to generate an avatar is also a limitation of the proposed method and should be discussed. The qualitative results are mostly provided for the cartoonish characters which are well known. It would have been nice to see more creative avatars e.g., a doctor with a Woody's Hat, etc.
Flag For Ethics Review: ['Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback and questions! Below we address the questions and concerns separately.
### **Q: The paper is a straightforward combination of existing methods and lacks novelty.**
**A:** Our work addresses two important problems in text-driven avatar generation:
**i)** SDS-based methods struggle to provide view-consistent supervision for avatar creation. Our work proposes 3D-consistent SDS with occlusion culling by utilizing SMPL and ControlNet, effectively increasing generation quality and avoiding multiple faces.
**ii)** Images contain rich information about human poses and interactions. We distill this information from the diffusion model into the avatar through arbitrary-pose-conditioned SDS, reducing the dependence on SMPL and allowing avatars with complex shapes to be animated.
Moreover, compared to the existing text-to-avatar works AvatarCLIP, AvatarCraft, and DreamAvatar, our work achieves the best visual quality and, for the first time, allows animation of complex avatars.
### **Q: A mesh-based animation baseline is missing.**
**A:** The solution based on mesh extraction and SMPL animation has two disadvantages.
**i)** There is a serious loss of fidelity in the conversion from the implicit field to mesh.
**ii)** SMPL only describes the naked human body and is not suitable for animating complex shapes, which is why the avatars created by AvatarCLIP are all SMPL-like shapes.
For comparison, we further provide animated results by mesh-based method and ours in Fig. 5 of the rebuttal pdf. Even with the professional tool Mixamo (requires manual rigging), the mesh-based animated results are still inferior to ours.
### **Q: The quality of the generated avatars is limited.**
**A:** Our results already outperform existing text-driven sota works with much faster generation speed (46% and 50% training time of AvatarCraft and DreamAvatar). Both reviewer kXf3 and DdUA gave positive comments on our experimental results, such as: “Experiments show SOTA performance for text-driven avatar generation and animation,” “The comparison with Sota is fair and complete to the best of my knowledge. The generated static avatars are in good quality.” The quality could be further improved via mesh refinement or zoom-in training but won’t be our contribution.
### **Q: Would be nice to provide further implementation details.**
**A:** We gently remind the reviewer that these implementation details are already provided in Sec. 3 of the supplementary material. Throughout the entire training process, we set the batch size to 1, set the classifier-free guidance scale to 50, and use the AdamW optimizer with a learning rate of 1e-3.
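For reference, the reported settings can be gathered in one place as a config sketch (the dictionary and its field names are ours for illustration, not from the authors' code):

```python
# Hypothetical summary of the training settings reported above
train_config = {
    "batch_size": 1,
    "guidance_scale": 50,     # classifier-free guidance scale
    "optimizer": "AdamW",
    "learning_rate": 1e-3,
}
```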
### **Q: How is the background learned during training, what are light settings?**
**A:** We use a two-layer MLP network as background NeRF to model the background which is common in DreamFusion implementations. We encode the ray direction using frequency encoding and utilize it as an input to the background NeRF, which predicts background color solely based on the ray direction. During training, the background NeRF is also optimized by SDS gradients, with the learning rate of 1e-3.
Lighting modeling remains future work (it is also not yet considered in the related works DreamAvatar and AvatarCraft), since our work focuses on text-to-avatar generation.
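The frequency encoding of the ray direction described above can be sketched as a standard positional encoding. This is an illustration under assumed hyperparameters (e.g. 4 frequency octaves), not the authors' implementation:

```python
import numpy as np

def frequency_encode(d, num_freqs=4):
    """Encode a 3D ray direction with sin/cos features at octave frequencies."""
    feats = [d]
    for k in range(num_freqs):
        feats.append(np.sin((2 ** k) * np.pi * d))
        feats.append(np.cos((2 ** k) * np.pi * d))
    return np.concatenate(feats, axis=-1)

d = np.array([0.0, 0.6, 0.8])  # unit ray direction
enc = frequency_encode(d)      # 3 + 3*2*4 = 27-dim feature for the background MLP
```

The encoded vector would then be the input to the small background network, which predicts a color per ray direction.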
### **Q: What is the impact of the density-weighted network?**
**A:** The density-weighted network mitigates animation artifacts due to inaccurate warping by setting the density of the points far from the target mesh surface to zero.
### **Q: How is the performance of the density-weighted network compared to different variants for non-rigid deformations?**
**A:** We did try various modules for non-rigid deformations, such as the non-rigid motion module from HumanNeRF, but found that the network failed to learn the correct deformations (severe artifacts appeared). We think the reason is that learning deformations from arbitrary pose-conditioned ControlNet is much more difficult than from videos.
### **Q: Why are random body poses required during training? Would the DWN module still work without random body pose?**
**A:** The DWN module will not work if random body poses are not used during training. The inputs of DWN involve the ray sampling point $p$ and its nearest neighbor vertex $v_c$, and the nearest vertex $v_c$ depends on the current body pose. If the body pose is fixed, DWN will degenerate to a constant value, similar to the animation method of AvatarCraft, which cannot animate complex avatars.
### **Q: The paper has limitations in visual quality, time cost, and avatar creativity.**
**A:** Although not fully satisfactory, our method outperforms previous text-driven avatar creation works in terms of visual quality, time cost, and avatar creativity.
**a) Visual quality.** For static avatar creation, our results achieve sota visual quality compared to previous works on text-driven avatar creation. For animation, the severe artifacts of the Woody case are actually not common, and we provide more animation results in Fig. 2 and Fig. 5 of the rebuttal pdf.
**b) Time cost.** To generate a canonical avatar, our work takes about one hour on one NVIDIA A100 GPU, significantly outperforming existing text-driven avatar creation works. For comparison, AvatarCLIP takes 5 hours on one NVIDIA A100 GPU, AvatarCraft takes 2.2 hours on one NVIDIA A100 GPU, DreamAvatar takes about 2 hours on one NVIDIA RTX 2080Ti GPU.
**c) Creative characters.** We mainly provide cartoon characters for a fair comparison with previous text-driven avatar creation works. Many photorealistic avatars of celebrities are provided in Fig. 1 of the supplementary material pdf, including “Taylor Swift”, “Albert Einstein”, “Emma Stone”, “Linoel Messi”, etc. Following the reviewer's suggestion, we provide more creative avatars in Fig. 4 of the rebuttal pdf, such as “a doctor with Woody's Hat” and “Taylor Swift in Snow White costume”. | Summary: This paper proposes a new method for text-to-3D avatar generation. The proposed pipeline has two stages. The first stage generates a static avatar while the second stage learns the deformation properties of the avatar for animation. The authors propose 3D-consistent occlusion-aware score distillation sampling which seems to improve the generation quality over previous methods with standard score distillation sampling. The paper further includes the results of animation and interaction of the generated avatars.
Strengths: **Method:** The proposed 3D-consistent Score Distillation Sampling and occlusion culling are reasonable and technically sound. They seem to be effective in the provided ablation study.
**Experiment:** The comparison with SOTA is fair and complete to the best of my knowledge. The generated static avatars are of good quality.
**Presentation:** The paper is well-structured and well-written. I find it easy to follow and understand.
Weaknesses: 1. I have some doubts regarding the relationship to the existing work in Table 1. DreamAvatar is marked as non-animatable; however, the original DreamAvatar paper does show reposing results in their Fig. 4. This seems to contradict the claim in this paper. And regarding the interaction between avatars/objects, shouldn't this be possible with all methods, as it is basically composited volume rendering, if I understand correctly?
2. In L34, the authors mention that realistic animation involves changing texture and shape in different poses. However, in the qualitative results in the videos, I did not see such pose-dependent changes, only articulation with LBS. It would be helpful to visualize the pose-dependent changes in the canonical space to understand if the model indeed learns the pose-dependent effects.
3. In video 00:00-00:01, in the leftmost sequence, Woody’s upper legs disappear when they are crossing. Why do such artifacts happen?
I am on the negative side currently. The main contribution of this paper w.r.t. previous works seems to be making 3D text-to-avatars "animatable" and "interactive" for the first time (L49, Table 1). However, regarding "animatable", previous work DreamAvatar does demonstrate avatars in new poses, and it's unclear why DreamAvatar is not "animatable". Also, the animation quality in this paper is still not satisfactory - there are artifacts (legs disappearing) and no obvious pose-dependent effects. Regarding "interactive", it seems that this is achieved simply by rendering two avatars in a composite way, which is straightforward and can be done with all previous methods. I hope the authors can clarify the contributions of this paper w.r.t. previous methods.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Why is DreamAvatar regarded as non-animatable?
2. How does the learned pose-dependent shape and texture change look in the canonical space?
3. What causes the disappearing leg artifacts in the video?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no limitation discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback and valuable questions! Below we address the questions and concerns separately.
### **Q: Why is DreamAvatar regarded as non-animatable?**
**A:** Although DreamAvatar can repose the avatar via **retraining**, it is impractical for animation due to the inefficiency (it takes two hours for a novel pose) and inconsistency (the appearance of the avatar in different poses may be changed, as shown in Fig. 1 of the rebuttal pdf). In fact, DreamAvatar doesn't claim to be able to animate avatars, nor does it provide animation results. For the above reasons, we believe DreamAvatar can be safely marked as not animatable.
### **Q: Interaction between avatars/objects should be possible with all methods as it is basically composited volume rendering.**
**A:** Regarding the interaction between avatars/objects, yes, naive interactions should be possible for all methods via composited volume rendering, and we will modify some ambiguous expressions, such as in Table 1 and Line 262. Different from existing works, we explore the 2D image prior (where social interactions are abundant) from the diffusion model to improve scene interaction. Specifically, we find that scene-specific fine-tuning with our proposed 3D-consistent SDS can further eliminate artifacts and make interactions more realistic, as shown in Fig. 5 in the supplementary material.
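For concreteness, the naive composited volume rendering mentioned above can be sketched as follows. This is our own illustrative NumPy sketch (function name, inputs, and shapes are assumptions, not code from the paper): densities of the two fields add, colors are mixed density-weighted per sample, and the result is alpha-composited along the ray as usual.

```python
import numpy as np

def composite_render(sigmas_a, colors_a, sigmas_b, colors_b, deltas):
    """Naive composited volume rendering of two radiance fields along one ray.

    sigmas_*: (N,) densities sampled at shared ray points
    colors_*: (N, 3) colors at those points
    deltas:   (N,) distances between consecutive samples
    Returns the rendered (3,) RGB value for the ray.
    """
    sigma = sigmas_a + sigmas_b                        # densities add
    # density-weighted color mixture at each sample (guard against divide-by-zero)
    color = (sigmas_a[:, None] * colors_a + sigmas_b[:, None] * colors_b) \
            / np.maximum(sigma, 1e-8)[:, None]
    alpha = 1.0 - np.exp(-sigma * deltas)              # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)      # standard alpha compositing
```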
### **Q: How does the learned pose-dependent shape and texture change look in the canonical space?**
**A:** We provide a visualization of pose-dependent changes in Fig. 5 of the rebuttal pdf, where character Elsa's skirt and hair are changed with pose changes. Unfortunately, these changes cannot be displayed in canonical space because our animation operations are irreversible.
### **Q: What causes the disappearing leg artifacts in the video?**
**A:** The disappearing leg artifacts are caused by wrong predictions of the density weighting network, which makes the masked area around the legs too large. Due to the instability of SDS optimization, the density weighting network is difficult to converge to an optimal state for the Woody case. However, this situation is not common; most characters' upper legs do not disappear when they are crossing: e.g., Fig. 1(a) of the main paper and Fig. 2 of the supplementary material.
### **Q: Contributions of this paper w.r.t previous methods.**
**A:** The main contribution of this paper is making text-driven 3D avatars “complex” and “animatable” for the first time. Previous text-to-avatar works AvatarCLIP and AvatarCraft are animatable, but the avatars generated by them are required to be highly similar to SMPL templates and oversimplified in shape. DreamAvatar is not animatable because it requires retraining for each pose control. For animation, despite rare artifacts, our method can animate various complex avatars without manual rigging and pose-dependent effects are provided in rebuttal pdf Fig. 5. For making scenes with interactions, our work not only renders two avatars in a composite way, but also proposes a scene-specific fine-tuning with our proposed 3D-consistent SDS.
### **Q: Missing Limitations.**
**A:** We have supplemented the discussion of limitations in global response.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the clarifications. However, I found the results still not strong enough to claim animatable and complex as the main contribution - the shape and animation quality are not yet satisfactory, and the artifacts in animation seem to be inherent to the method due to the instability.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's valuable comment. Our results for complex static avatars are sota with better quality and shorter generation time than previous text-to-avatar works. For animation, although current SDS-based methods have inherent instability, using rich 2D image priors to learn 3D avatar animation is novel and worth exploring, avoiding that inverse LBS cannot be generalized to complex avatars. The Elsa example in the rebuttal pdf demonstrates pose-dependent changes learned from image priors, and we hope these results encourage more exploration of complex avatar animation. | Summary: The work proposes a method for generating 3D skeleton animatable characters by distilling a latent diffusion model. It uses Control Net to add additional key point map conditioning to the diffusion process, improving the granularity of pose control. It uses DreamFusion to distill a NeRF model given a text prompt and the skeleton conditioning in a fixed A-pose. The model is then refined by sampling additional poses from the SIMPL model used as a skeleton prior. The model is shown to be able to generate animatable models from diverse text prompts that does not require re-training for new novel poses.
Strengths: The method proposes a stable training regime, starting with pre-training using an SMPL mask, followed by single-pose training and fine-tuning on multiple poses. The addition of depth culling also helps remove artifacts seen in prior methods.
Weaknesses: - The work's main claim of animatability needs further evidence. The results show animation only for non-complex characters; no characters with long skirts or hair are animated, only reconstructed. For the samples that are shown, the animation quality is not a big departure from LBS deformation. Furthermore, there is a lack of diversity in aspects of pose that are not included in the skeleton model; for example, hands are blurry and biased toward a specific pose.
- The ability of the method to capture interactions appears limited to blending in 3D, where complex interactions between parts are not modeled, presumably due to the independent training. For example, the hands are not clasped in the waltz example in Fig. 3.
- The generated models exhibit unnatural body proportions due to the mismatch between the skeleton conditioning and the text conditioning. For example, Michael Jordan (a basketball player) and Lionel Messi (a football player) seem to have the same body proportions in Fig 1. of the Supplementary.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - L22. What is the resolution of the model for rendering at 3 s?
- L228. Does joint training affect generalization performance? A comparison would be useful here.
- How consistent is quality of results? Results showing reconstructions using different initial noise values would demonstrate diversity of generations.
- L272. Is inverse LBS just the proposed method without the additional MLP for d'? Clarification would be useful.
- Eqn (5). The motivation of this formula is not apparent from the text. Why would Sigmoid(d') not work similarly?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Authors have not addressed limitations or societal impact.
It is suggested that the authors outline weaknesses mentioned above and suggest remediation strategies as future work.
Social impact follows prior works such as DreamFusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors are grateful for the reviewer's valuable feedback and insightful questions. We are encouraged by your support for this work! Below we address the concerns separately.
### **Q: Further evidence of animatability is needed. No animation results of complex characters (with long skirts or hair) are shown.**
**A:** We show an animation result of Mulan (long hair) in Fig. 2 of the supplementary material. We further provide more animation results in Fig. 5 of the rebuttal pdf, animating Elsa with a long skirt and hair. As shown in Fig. 7(c) of our paper, our learnable animation method is significantly more effective than baseline methods Inverse-LBS and AvatarCraft. Intuitively, it makes sense that our animation method learned from diffusion image priors is better than the baseline methods that only rely on nearest neighbor vertex query and SMPL’s inverse-LBS, because SMPL only describes the naked human body and is not suitable for complex shapes.
### **Q: Lack of pose diversity in the skeleton model.**
**A:** It is true that the pose of the skeleton model lacks diversity and may cause blurring and artifacts. But by upgrading the skeleton model to SMPLX and ControlNet to v1.1, these shortcomings can be alleviated.
### **Q: The ability of the method to capture interactions appears limited to blending in 3D.**
**A:** In addition to independent training and blending in 3D, we also explore scene-specific fine-tuning, which puts multiple avatars in the same scene for joint training with the proposed 3D-consistent SDS loss (where image priors would “correct” unrealistic interactions), demonstrating improvements in visual quality as shown in Fig. 5 of the supplementary material pdf.
### **Q: Unnatural body proportions.**
**A:** The body proportions can be freely controlled via specifying SMPL model parameters. We provide results of using different SMPL shape parameters to control body proportions in Fig. 3 of the rebuttal pdf.
### **Q: What is the resolution of the model for rendering at 3 s?**
**A:** For animation rendering, our 3D avatar representation renders 128x128 latents. Then, the latents are decoded by the VAE decoder of Stable Diffusion to obtain 1024x1024 RGB images. The whole rendering and decoding process takes less than 3 seconds. The speed bottleneck is mainly in the nearest-neighbor vertex queries of the ray sampling points for inverse-LBS. We use CPUs for these computations due to the huge memory requirement. Improving rendering speed remains future work, where we could consider writing faster GPU operators and more efficient nearest-neighbor querying.
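To illustrate why the nearest-neighbor vertex query is memory-hungry, here is a minimal brute-force sketch (our own illustration, not the actual implementation; names and shapes are assumptions). The intermediate distance tensor scales as P×V, which for millions of ray samples against the 6890 SMPL vertices quickly becomes prohibitive on a GPU.

```python
import numpy as np

def nearest_vertex(points, vertices):
    """Brute-force nearest-neighbor vertex query for ray sample points.

    points:   (P, 3) ray sample positions
    vertices: (V, 3) mesh vertices (6890 for SMPL)
    Returns per-point indices (P,) and distances (P,) of the closest vertex.
    The (P, V, 3) intermediate is why memory, not compute, is the bottleneck.
    """
    diff = points[:, None, :] - vertices[None, :, :]   # (P, V, 3) via broadcasting
    dist2 = (diff ** 2).sum(-1)                        # (P, V) squared distances
    idx = dist2.argmin(axis=1)
    return idx, np.sqrt(dist2[np.arange(len(points)), idx])
```

In practice a spatial index such as a k-d tree would reduce both memory and query time; the brute-force form above is only meant to show the scaling.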
### **Q: Does joint training affect generalization performance?**
**A:** In fact, we provide a comparison of two-stage training (Stage I + II) and joint training (Stage II only) in Fig. 7 of the supplementary material pdf. The results show that joint training does hurt generalization performance, leading to more artifacts and “phantom limbs”. More discussions are given in sec 2.4 of the supplementary material.
### **Q: How consistent is the quality of results when using different initial noise values?**
**A:** We provide these results in Fig. 4 of the rebuttal pdf. The quality of avatars with different initial noises is consistently good.
### **Q: Is inverse LBS in L272 just the proposed method without the additional MLP for d'?**
**A:** Inverse-LBS in L272 is the proposed method without additional MLP for d’ and NeRF fine-tuning in Stage II. We will add this clarification in the revision.
### **Q: The motivation of Eqn (5) is missing. Why would Sigmoid(d') not work similarly?**
**A:** Sorry for the missing motivation of Eqn (5). We derive this formula from the mask function ${\eta(\boldsymbol{p})}$ in the AvatarCraft paper. This function aims to set the density of the points far from the target mesh surface to zero, formulated as:
$$
\eta(\boldsymbol{p})=
\begin{cases}
0, \text{if } d(\boldsymbol{p}) > \delta,\\\\
1, \text{if } d(\boldsymbol{p}) \le \delta,
\end{cases}
$$
where $d(\cdot)$ is the distance function between the ray sampling point $\boldsymbol{p}$ and its nearest neighbor vertex, and $\delta$ is a constant threshold. Such a constant threshold is not suitable for avatars with complex shapes, so our work uses the MLP-predicted value $d'$ (i.e., Eqn (4) in our paper) to replace the threshold $\delta$, resulting in:
$$
w_d:=\eta(\boldsymbol{p})=
\begin{cases}
0, \text{if } d(\boldsymbol{p}) - d' > 0,\\\\
1, \text{if } d(\boldsymbol{p}) - d' \le 0.
\end{cases}
$$
Finally, we introduce a sigmoid function with preset parameter $a$ to make it smooth and differentiable, obtaining Eqn (5):
$$ w_d = \text{Sigmoid}(-(d-d')/a). $$
An implementation of $ \text{Sigmoid}(d') $ might also work similarly but lacks intuition.
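Numerically, Eqn (5) acts as a soft, differentiable version of the hard threshold above. A minimal sketch (our own illustration; the value of the smoothness parameter a is an assumption, not from the paper):

```python
import math

def density_weight(d, d_pred, a=0.005):
    """Smooth, differentiable soft mask replacing the hard threshold eta(p).

    d:      distance from ray sample point p to its nearest mesh vertex
    d_pred: MLP-predicted, point-dependent threshold d'
    a:      preset smoothness parameter (illustrative value)
    Returns a weight in (0, 1): close to 1 when d <= d', close to 0 when d >> d'.
    Sigmoid(-(d - d') / a) written out explicitly.
    """
    return 1.0 / (1.0 + math.exp((d - d_pred) / a))
```

As a → 0 the function approaches the hard 0/1 mask, while larger a yields smoother gradients for optimization.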
### **Q: Missing Limitations and Societal Impacts.**
**A:** We thank the reviewer for the valuable suggestions. We have provided the discussion of limitations and societal impacts in the global response.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and additional results.
Most of my questions are answered. I am still leaning positive, but the quality improvements from the proposed approach are not large enough to increase my rating further.
> The finetuning of interactions does appear to improve results; however, it seems to do so mainly with regard to aspects that are independent of the interactions (e.g., the boots in Fig. 5 or the appendix; the "hand bumping" claim is overstated).
> Improvements that may come from more expressive mesh conditioning is only speculated.
> The body proportions follow the conditioning mesh and the rebuttal shows that it can be controlled. However, one would expect to distill such information from the model. A method that doesn't rely on the mesh conditioning may do this better. The current framework would require a separate system for estimating body proportions, which may not be trivial for non-realistic human characters.
---
Reply to Comment 1.1.1:
Comment: Thank you for the valuable feedback and support!
**Interactions.** Due to the use of far camera views when finetuning scenes with interactions, the quality improvements shown in the current manuscript are mainly in interaction-independent aspects like boots. More convincing results can be obtained by focusing camera views on areas of interactions (e.g., holding hands) while fine-tuning.
**Body proportion control.** The current framework still needs to manually tune the shape parameters of SMPL for body proportion control. But considering that SMPL and mesh rendering can be differentiable, it is feasible to automatically adjust shape parameters with SDS gradients, which remains as future work. | Summary: This work proposes a method for text-driven human avatar generation. It combines animatable human nerf and diffusion model to implement avatar generation and animation. Extensive experiments demonstrate that its performance outperforms existing works. Also, this work supports avatar-avatar, avatar-object, and avatar-scene interactions.
Strengths: 1. This is among the first diffusion-based works that can generate animatable 3D avatars, which also support the interactions between avatars and scenes/objects.
2. The occlusion culling method is well-motivated and it can address the multi-face issue effectively together with the carefully selected text prompt for viewpoints.
3. Experiments show SOTA performance for text-driven avatar generation and animation.
Weaknesses: 1. This work is a little bit overclaimed (line 49-51) because there exist several prior works which can generate animatable 3D avatars with complex shapes and appearances, such as [1,2].
2. The authors claim that DreamWaltz is able to make a scene with diverse interactions across avatars, objects, and scenes. However, it is difficult for me to evaluate whether this point is technically challenging because there is no specific design or module to enable these interactions in the proposed framework. Please explain more about this point because it is the main contribution of this work as it is mentioned in the title.
3. The proposed framework is similar to AvatarCraft; the only differences are the SMPL inverse skinning and ControlNet. Please explain more about the differences between DreamWaltz and AvatarCraft.
4. It is confusing to classify AvatarCraft as not animatable (Table 1) because AvatarCraft can also deform the canonical avatar to different target poses in the observation space. I believe by using SMPL, DreamWaltz can only change the pose parameters to implement avatar animation, which is quite similar to AvatarCraft.
5. The authors might use the term 'generalizable NeRF' carefully because when we use 'generalizable NeRF', it usually refers to the NeRF model that is not overfitted to one 3D scene but can generalize to any inputs instead. It is highly recommended to think about this term again to avoid confusion or ambiguity because the NeRF model which can be deformed to any target pose is usually named as deformable/animatable instead of generalizable.
6. I think this work is technically solid and has good performance, but there exists large improvement room for the current manuscript.
[1] EVA3D: Compositional 3D Human Generation from 2D Image Collections.
[2] Avatargen: a 3d generative model for animatable human avatars.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How to implement avatar-object and avatar-scene interaction? Please introduce more details for these two applications.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Not addressed. There is no discussion of broader societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors are grateful for the detailed and in-depth feedback from the Reviewer. We have substantially revised the manuscript as suggested by the reviewer. Below we address the mentioned concerns separately.
### **Q: Overclaiming in Line 49-51: “for the first time capable of generating avatars with complex shapes and appearance.”**
**A:** We apologize for the misleading statement in Line 49-51: “for the first time capable of generating avatars with complex shapes and appearance, ready for high-quality animation and interaction”. Our work focuses on the challenging text-to-avatar generation using pre-trained vision-language models, different from EVA3D and AvatarGen, which target a specific avatar domain using fashion images. Existing works AvatarCLIP, AvatarCraft, and DreamAvatar are more comparable. Compared to these works, our work can create both complex (non-SMPL-like) and animatable avatars from text, which is why we claim "for the first time". We appreciate the reviewer's comment and will revise the statement in Line 49-51.
### **Q: More explanation about making a scene with diverse interactions.**
**A:** Making a scene with diverse interactions is challenging because it requires a deep understanding of the intricate interplay between body poses, objects, shape, and proximity. These interactions are hard to model by hand, and a recent work [1] later than ours proposes to learn human interaction priors from large image collections. Our work similarly employs image priors (where social interactions are abundant) from a pretrained diffusion model to enable more realistic scene interactions. Specifically, we use the proposed 3D-consistent SDS for composite scene NeRF fine-tuning. The scene-specific SDS gradients from the diffusion model can enhance the visual quality of the scene involving avatars and interactions, for example, make “hands bumping” effects more realistic, as shown in Fig. 5 of the supplementary material.
[1] Generative Proxemics: A Prior for 3D Social Interaction from Images. Arxiv 2023.
### **Q: Differences between DreamWaltz and AvatarCraft.**
**A:** AvatarCraft and our DreamWaltz are both two-stage approaches, where a canonical avatar is first created and then animated, but the details and results are significantly different.
The biggest difference is whether animation is learnable. AvatarCraft only borrows the inverse-LBS from SMPL to animate the implicit field, with a hard-coded mask function to filter out the points far from the template mesh. Also based on inverse-LBS, our DreamWaltz allows the implicit field and mask function to be further learned under the supervision of a rich image prior: the ControlNet supervision from any pose condition. This improvement is not trivial, as it gets rid of the over-reliance on the SMPL mesh topology, allowing us to animate avatars with more complex (non-SMPL) shapes (as shown in Fig. 7(c) of our paper), which is crucial for imaginative text-driven avatar creation.
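As background for the inverse-LBS discussion above: a point sampled in the posed (observation) space can be warped back to canonical space by inverting the blended per-joint transform of its nearest vertex. Below is a minimal sketch of that step; it is our own illustration (the function name, inputs, and shapes are assumptions, not the paper's implementation):

```python
import numpy as np

def inverse_lbs(p_obs, bone_transforms, skin_weights):
    """Warp an observed-space point back to canonical space via inverse LBS.

    p_obs:           (3,) point in the posed (observation) space
    bone_transforms: (J, 4, 4) per-joint rigid transforms (canonical -> posed)
    skin_weights:    (J,) blend weights of the nearest vertex (sum to 1)
    """
    T = (skin_weights[:, None, None] * bone_transforms).sum(axis=0)  # blended 4x4
    p_h = np.append(p_obs, 1.0)                                      # homogeneous coords
    return (np.linalg.inv(T) @ p_h)[:3]                              # apply inverse warp
```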
Besides animation, other differences lie in canonical avatar creation and scene making. Thanks to our proposed 3D-consistent SDS loss, we can use a more concise pipeline to create the canonical avatar without coarse-to-fine and multi-bbox training, achieving faster training (only 46.2% of AvatarCraft's training time on one NVIDIA A100 GPU) and comparable visual quality.
We also explore scene making with interactions across avatars and objects (which is not discussed by AvatarCraft), finding that scene-specific fine-tuning with our proposed 3D consistent SDS can further eliminate artifacts and make interactions more realistic.
### **Q: Why classify AvatarCraft as not animatable in Table 1?**
**A:** We gently remind the reviewer that we classify AvatarCraft as animatable in Table 1, where DreamAvatar is classified as not animatable. Although DreamAvatar allows pose control, the target pose needs to be pre-determined because it requires retraining for each pose control and thus is impractical for animation due to the inefficiency (takes two hours for a novel pose) and inconsistency (the appearance of the avatar in different poses may be changed, as shown in Fig. 1 of the rebuttal pdf). In fact, DreamAvatar doesn't claim to be able to animate avatars, nor does it provide animation results. For the above reasons, we believe DreamAvatar can be safely classified as not animatable.
### **Q: Using SMPL, DreamWaltz is quite similar to AvatarCraft.**
**A:** Yes, we admit that DreamWaltz can change the pose parameters to implement rough avatar animation, which is quite similar to AvatarCraft. However, DreamWaltz can animate complex avatars like “Woody with cowboy hat”, while AvatarCraft can only animate SMPL-like avatars such as “Woody” with a bald head, as shown in Fig. 7(c) of our paper.
### **Q: Concern about the term “generalizable NeRF”.**
**A:** We sincerely appreciate the reviewer's suggestion and will change the term "generalizable NeRF" to "deformable NeRF" in the revision.
### **Q: There exists large improvement room for the current manuscript.**
**A:** We thank the reviewer for the positive comments on the soundness and performance of our work. The current manuscript will be carefully polished. DreamWaltz solves the incompatibility between complex structure (non-SMPL topology) and animation in the existing text-driven avatar creation works, and explores making scenes with diverse interactions. We sincerely hope that our responses could address the reviewer’s concerns.
### **Q: How to implement avatar-object and avatar-scene interaction?**
**A:** Please refer to Q4 in the global rebuttal for these implementation details.
### **Q: Missing Limitations and Societal Impacts.**
**A:** We have supplemented the discussion of limitations and societal impacts in the global response.
---
Rebuttal Comment 1.1:
Comment: Thanks for the answers. The authors have addressed most of my concerns. However, based on the rebuttal, I still find the avatar-object and avatar-scene interaction not technically challenging enough. So, it is not reasonable to claim these points as the main contribution or novelty. Again, the work itself is good and I believe there exists ample improvement room in the paper's presentation. I will keep my rating based on the current manuscript.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's efforts and valuable comments.
Making a scene with avatars and interactions is an extremely tricky task due to ambiguous supervision under the zero-shot setting. Our work contributes to **visual quality** (*better than previous text-to-avatar works with shorter generation time*), **animation** (*able to animate complex avatars like "Mobile Suit Gundam" in Fig. 2 of Supp. Material, which has never been shown in previous works*), and explores improvements to **interactions** which are *effective and easy to use (same framework as used for avatar generation)*.
We thank the reviewer again for the recognition of our work itself, and believe that the deficiencies in the paper's presentation can be resolved in time and will not affect the contributions. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers and ACs for their time and efforts. Below we provide the responses to some frequently asked questions and main concerns, as well as the discussions of limitations and societal impacts.
### **Q1: Contribution of our work w.r.t previous methods.**
**A:** Previous methods for text-driven avatar generation include AvatarCLIP [1], AvatarCraft [2], and DreamAvatar [3]. Compared to them, we achieve both complex (non-SMPL-like) and animatable avatar generation for the first time. Besides, we explore how to make scenes with diverse interactions, where our framework can be used for scene-specific fine-tuning to further improve visual quality and interaction realism using 2D image priors, where social interactions are abundant.
### **Q2: Motivation of our animation learning.**
**A:** Text-driven avatar generation is zero-shot and cannot obtain animation information from video data, so existing works such as AvatarCLIP and AvatarCraft rely on SMPL's inverse skinning for animation. However, SMPL only describes naked human bodies and is not suitable for animating avatars with complex appearances. The core idea of our animation method is to distill pose-dependent appearance knowledge from the pretrained diffusion model for deformation learning, which effectively compensates for the deficiency of SMPL-based animation.
### **Q3: Why is DreamAvatar classified as non-animatable?**
**A:** DreamAvatar allows posed-avatar generation; however, this requires retraining (about 2 hours) for each new target pose and cannot guarantee a consistent appearance (as shown in Fig. 1 of the rebuttal pdf), making DreamAvatar non-animatable.
### **Q4: How to implement avatar-object and avatar-scene interaction?**
**A:** Implementation details of these two interactions are given as follows. We will add these details in the revision.
**Avatar-object interaction.** The whole training process of avatar-object interaction is actually the same as pure avatar generation, except that the text prompt needs to add object descriptions. For example, we change “Lara Croft” to “Lara Croft with weapons” and change “Kobe Bryant” to “Kobe Bryant with basketball”, obtaining the results in Fig. 1(c) of our paper. In fact, our method can further achieve avatar-object animation, as shown in Fig. 2 of the rebuttal pdf.
**Avatar-scene interaction.** First, we learn the animatable avatar NeRF representation (e.g., “Woody”) via Stage I + II of DreamWaltz, and train the static scene NeRF representation (e.g., “a chair made of cheese”) via Latent-NeRF. Then, both the avatar and the static scene NeRFs are aligned manually and can be rendered by the same camera. We introduce an extra fine-tuning stage as mentioned in sec 3.2.3, utilizing the proposed 3D-consistent SDS loss to fine-tune the hybrid avatar-scene NeRFs and the introduced density weighting network. Here the SDS loss is scene-specific, since we condition the ControlNet on the scene-specific textual description (e.g., “Woody sitting in a chair made of cheese and applauding”) and skeleton images (e.g., random frames from the “sitting and clapping” motion sequence). It takes 30,000 iterations for the extra fine-tuning stage, as described in L78-L79 in the supplementary material pdf.
### **Limitations**
Although DreamWaltz can generate SOTA high-quality complex avatars from textual descriptions, the visual quality can be significantly improved with higher resolution training at higher time and computation cost. The quality of face and hand texture can be further improved through dedicated optimization of close-up views as well as adopting SMPLX instead of SMPL for ControlNet conditioning.
Body proportions can be freely controlled by specifying SMPL model parameters; for future work, we could train an SMPL shape generative model to facilitate textual control of SMPL body shapes.
Animation relies on distilling 2D priors depicting randomly selected poses into NeRF representations and the introduced NeRF re-weighting module. Extended training durations tend to yield higher animation quality and fewer artifacts. However, determining the optimal training duration a priori remains challenging. To address this issue, we intend to investigate metrics conducive to quantifying training convergence.
Following an optimization-based approach akin to DreamFusion, generating an avatar requires a dedicated optimization procedure for each textual input, consuming approximately an hour. To expedite avatar creation for arbitrary text inputs, we plan to investigate techniques such as modulation to foster generalization, thereby enabling faster avatar acquisition.
### **Societal Impacts**
The societal impact discussion follows prior 3D generative works such as DreamFusion [4] and AvatarCraft [2].
Given our utilization of Stable Diffusion (SD) as the 2D generative prior, our model could potentially inherit societal biases present within the vast and loosely curated web content harnessed for SD training. We strongly encourage transparent, ethical, and non-offensive usage, as well as a conscious avoidance of generating proprietary characters.
It is important to acknowledge that generative models such as ours may have implications for the displacement of creative workers through automation. Nevertheless, DreamWaltz is intended as a tool to liberate designers and animators from laborious and repetitive work, allowing them to focus on intellectual creativity, and to enhance accessibility.
### **Reference**
[1] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars. SIGGRAPH 2022.
[2] AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control. ICCV 2023.
[3] DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models. Arxiv 2023.
[4] DreamFusion: Text-to-3D using 2D Diffusion. ICLR 2023.
Pdf: /pdf/b4c467495198c7c180d868bef45117a268762a0b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Convergence analysis of ODE models for accelerated first-order methods via positive semidefinite kernels | Accept (poster) | Summary: The article presents a continuous-time optimization framework for convex objectives which streamlines the process of giving rate guarantees. It begins from (Exact PEP), a difficult-looking infinite-dimensional optimization problem, which is relaxed using convexity and then recast in a dual form, wherein it suffices to verify the positive semidefiniteness of a PEP kernel. This mirrors the discrete PEP kernel of Drori and Teboulle, and the authors show that this continuous PEP kernel can in fact be derived as a certain limit of the discrete one.
The article then deploys its method on a variety of accelerated ODE methods (Nesterov methods, triple momentum, information-theoretic exact methods, and others). It additionally considers a variety of standard metrics, as appropriate for different situations (function value convergence, norm of gradient convergence). The rates given appear to be consistent with state-of-the-art continuous-time guarantees or to mirror discrete-time guarantees.
The method, which is designed to avoid construction of Lyapunov functions, instead requires construction of a positive definite kernel, which is built from the choice of a Lagrange multiplier function $\Lambda.$
Strengths: 1) The article is very clearly presented, with a large set of examples to illustrate the method. This is particularly important for illustrating the search for the magic $\Lambda.$
2) Certifying that the rate guarantee is correct, given the presented H-kernel (which can be computed directly from the ODE), and given the Lagrange multiplier is simple.
3) The article improves some existing rate guarantees and appears to reproduce many of the best-known guarantees, presumably found through more usual Lyapunov functional methods. In this sense, the article identifies exactly the way in which these Lyapunov functional estimates are optimal (in the sense that the c-PEP guarantees produced this way follow (5) -- see Remark 1).
4) The method is extended systematically to give convergence of gradients (section 4).
Weaknesses: 1) While the paper removes the need to produce a Lyapunov function, it introduces the need to produce the Lagrange multiplier function $\Lambda$. This does not appear to be systematized, and so it raises the obvious question of to what extent it is necessary to learn the c-PEP machinery only to search for the $\Lambda$, when one could just search from the beginning for the more intuitive Lyapunov function.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1) Is there a systematic way that you produce the Lagrange multiplier function? If so, this should be more prominently displayed.
2) The exact PEP is relaxed and then seems to play no further role. Can you provide any insight into the degree to which this relaxation is sharp? (Are there examples where it is solvable?) Is there anything to say about the Exact PEP at all, except for the relaxation?
3) Can you reverse engineer the c-PEP guarantee to produce optimization methods?
4) All the rate guarantees (for, say, the strongly convex case) look largely equivalent. Is the best possible rate guarantee (say, asymptotically), given a class of ODE methods with $a\sqrt{\mu}$ as its second coefficient and $1$ as its final coefficient, of the form $e^{-c(a)\sqrt{\mu}T}$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The article appropriately addresses its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and thoughtful comments.
> While the paper removes the need to produce a Lyapunov function, it introduces the need to produce the Lagrange multiplier function $\Lambda$. This does not appear to be systematized, and so it raises the obvious question of to what extent it is necessary to learn the c-PEP machinery only to search for the $\Lambda$, when one could just search from the beginning for the more intuitive Lyapunov function.
> Is there a systematic way that you produce the Lagrange multiplier function? If so, this should be more prominently displayed.
While not explicitly mentioned in the paper, there is a rule of thumb for choosing $\Lambda$ (or $\lambda$). When the expected convergence rate is $O(1/a(T))$, we found that setting $\Lambda(t)=a(t)/a(T)$ in Theorem 1 (or $\lambda(t)=a(T)/a(T-t)$ in Theorem 2) leads to the desired results. For instance, in Line 165, because the known convergence rate of AGM-SC ODE is $O(1/e^{\sqrt{\mu}T})$, we set $\Lambda(t)=e^{\sqrt{\mu}t}/e^{\sqrt{\mu}T}=e^{\sqrt{\mu}(t-T)}$. In the final version of our work, we will include a formal explanation of this rule.
> The exact PEP is relaxed and then seems to play no further role. Can you provide any insight into the degree to which this relaxation is sharp? (Are there examples where it is solvable?) Is there anything to say about the Exact PEP at all, except for the relaxation?
Dealing with the exact PEP itself is a highly challenging task, and to the best of our knowledge, there is no prior research exploring this direction, even in the discrete-time case. For the discrete PEP framework, [27] showed that, using an interpolation argument, the exact PEP can be relaxed into a tractable form, and this relaxation is exact, meaning that the relaxed PEP is equivalent to the exact PEP. However, the relaxation technique applied in our continuous PEP framework mirrors the relaxation technique used for the discrete PEP in [2], which is known not to be tight (see [27, Section 1.4] for a related discussion). It could be an interesting future direction to find an exact tractable reformulation of the exact continuous PEP.
> Can you reverse engineer the c-PEP guarantee to produce optimization methods?
In the literature, there has been a line of work [7, 26] for producing optimal optimization methods by reverse engineering the discrete PEP presented in [2]. As our continuous PEP serves as a continuous-time counterpart of the discrete PEP, it can be used to produce optimal ODE models. Given a family of continuous-time models (4), parametrized by the kernel $H(t,\tau)$, we denote the best achievable convergence guarantee as $\mathrm{Guarantee}(H)$. The task of finding the optimal ODE model is then formulated as $\min_{H}\mathrm{Guarantee}(H)$. Using the continuous PEP, we can express $\mathrm{Guarantee}(H)$ as $\mathrm{Guarantee}(H)=\min_{\lambda}\mathrm{Dual}(H,\lambda)$, where $\mathrm{Dual}$ is the dual objective function defined in (7). Now, the task of finding the optimal ODE model can be formulated as $\min_{H,\lambda}\mathrm{Dual}(H,\lambda)$. One can generate the optimal ODE models by solving this problem, although it may not be a trivial task.
> All the rate guarantees (for, say, the strongly convex case) look largely equivalent. Is the best possible rate guarantee (say, asymptotically), given a class of ODE methods with $a\sqrt{\mu}$ as its second coefficient and $1$ as its final coefficient, of the form $e^{-c(a)\sqrt{\mu}T}$?
We can partially answer this question as follows: The ODE model $\ddot{X}+a\sqrt{\mu}\dot{X}+\nabla f(X)=0$ achieves a convergence rate of $O(e^{-(a-1)\sqrt{\mu}T})$, although we do not guarantee that this convergence rate is the best possible. Proof sketch: This ODE model is equivalent to the second Bregman Lagrangian flow in [30] with $\alpha(t)=\log\sqrt{\mu}$ and $\beta(t)=(a-1)\sqrt{\mu}t$. As a result, it achieves a convergence rate of $O(e^{-\beta(T)})=O(e^{-(a-1)\sqrt{\mu}T})$.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
The author-reviewer discussion period is closing soon, so could you please go over the authors' rebuttal and respond with a message to the authors? It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers.
Best regards,
AC | Summary: This paper proposes a novel methodology that analyzes ODE models for first-order optimization methods by converting the task of proving convergence rates into verifying the positive semidefiniteness of specific Hilbert-Schmidt integral operators. Based on the performance estimation problems (PEP) and functional analysis, the authors establish convergence rates of various accelerated gradient flow models. The authors’ continuous time PEP framework provides insights into the analysis of the discrete-time PEP.
Strengths: 1. The authors developed a novel and simple framework for analyzing the convergence rate of continuous-time dynamics via positive semidefinite kernels.
2. The authors bridge the gap between the PEP framework for the continuous and discrete settings from continuous-time dynamics.
3. The authors’ continuous-time PEP framework provides new opportunities for the analysis of the discrete-time PEP.
Weaknesses: 1. The results of this paper lack experimental validation.
2. The verification of the positive semidefiniteness of an integral kernel is difficult in implementation. How do you solve the difficulty in the calculation for the integral kernel?
3. There is a misspelling in line 39.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and thoughtful comments.
> The results of this paper lack experimental validation.
In the initial submission of our paper, we did not include experimental validations, as our primary focus is to provide a new theoretical framework for convergence analysis of ODE models, rather than developing novel algorithms or ODE models. Additionally, most of the obtained convergence guarantees, or their corresponding discrete-time counterparts, are already well-known in the literature. However, the convergence guarantee of AGM-SC ODE with respect to the measure $\Vert\dot{X}(T)\Vert^{2}$ is novel. We have performed experiments for this guarantee. The results can be found in Figure 3 of the attached PDF and will be included in the final version.
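As an independent illustration (our own sketch, not the authors' Figure 3 experiment), one can simulate the AGM-SC ODE $\ddot{X}+2\sqrt{\mu}\dot{X}+\nabla f(X)=0$ with $X(0)=x_0$, $\dot{X}(0)=0$ on a toy strongly convex quadratic and observe that $\Vert\dot{X}(T)\Vert^{2}$ becomes tiny for moderate $T$, consistent with an exponential decay; the quadratic objective and the fixed-step integrator below are our own choices:

```python
import numpy as np

mu = 0.1
A = np.diag(np.linspace(mu, 1.0, 5))   # toy f(x) = 0.5 * x^T A x, eigenvalues in [mu, L]
x = np.ones(5)                          # X(0) = x0
v = np.zeros(5)                         # X'(0) = 0
h, T = 1e-3, 40.0

# Semi-implicit Euler integration of  X'' + 2*sqrt(mu)*X' + A X = 0
for _ in range(int(T / h)):
    v += h * (-2.0 * np.sqrt(mu) * v - A @ x)
    x += h * v

# ||X'(T)||^2 ends up many orders of magnitude below its early values
print(np.linalg.norm(v) ** 2)
```

The quantity printed at $T=40$ is tiny, in line with the exponential-in-$T$ decay that the guarantee for $\Vert\dot{X}(T)\Vert^{2}$ suggests.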
> The verification of the positive semidefiniteness of an integral kernel is difficult in implementation. How do you solve the difficulty in the calculation for the integral kernel?
We believe this concern does not significantly impact our contributions. Our work primarily focuses on establishing a theoretical foundation, rather than dealing with implementation. Due to the simplicity of our continuous PEP framework, compared to its discrete counterpart, all the results in our paper can be derived manually, without the need for numerical solvers.
However, we agree that the implementation of continuous PEP could be a possible future direction. An integral kernel can be approximated with arbitrarily small accuracy $\epsilon$ by finite-rank integral operators (see Townsend & Trefethen, 2013), and one can readily verify the positive semidefiniteness of finite-rank operators. Thus, one can numerically verify the positive semidefiniteness of a given integral kernel, with appropriate care for numerical approximation errors.
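As a sketch of this numerical route (our own illustration, not an implementation from the paper), one can evaluate a candidate kernel on a uniform grid as a crude finite-rank surrogate and inspect the eigenvalues of the resulting symmetric matrix; the exponential kernel below is a hypothetical stand-in for an actual PEP kernel:

```python
import numpy as np

def kernel_is_psd(K, T=1.0, n=200, tol=1e-8):
    """Numerically check positive semidefiniteness of a symmetric integral
    kernel K(t, tau) on [0, T]^2 by evaluating it on a uniform grid and
    inspecting the eigenvalues of the resulting matrix."""
    t = np.linspace(0.0, T, n)
    G = K(t[:, None], t[None, :])   # grid evaluation (a finite-rank surrogate)
    G = 0.5 * (G + G.T)             # symmetrize against round-off
    return np.linalg.eigvalsh(G).min() >= -tol

# The exponential kernel exp(-|t - s|) is a classical PSD example.
print(kernel_is_psd(lambda t, s: np.exp(-np.abs(t - s))))  # True
```

Passing this grid check is only evidence at the chosen resolution; a rigorous verification would need the approximation-error control mentioned above.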
> There is a misspelling in line 39.
In Line 39, "typiccally" should be corrected to "typically". Thank you for the catch.
## Reference
Townsend, A., & Trefethen, L. N. (2013). An extension of Chebfun to two dimensions. SIAM Journal on Scientific Computing, 35(6), C495-C518.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. Your answers partially cleared my confusion. So I keep my rating. | Summary: This paper presents a framework for analyzing convergence rates of a class of ODE models via the continuous-time performance estimation problem (PEP). The task of solving the PEP problem is relaxed into verifying the positive semidefiniteness of specific integral operators. The convergence rates of several accelerated gradient flow models were established using the proposed method.
Strengths: The novelty of the paper lies in the approach of defining and solving the continuous-time PEP problem to analyze ODE models. Under a change of variables and Lagrangian duality, it is sufficient to verify the positive definiteness of specific integral operators. Using this method, the authors recover known convergence guarantees and reveal previously unknown convergence rates for accelerated gradient flow models. The paper is clearly written and well-organized.
Weaknesses: The intuition behind the PEP kernel is not clear. The choices of the multiplier function $\Lambda$ and of $\nu$ are constructive and tricky for the ODE models shown in section 3.3 (after Theorem 1) and section 4.2 (after Theorem 2). Given an ODE model as in (4), it seems hard to determine whether it fulfills the assumptions in Theorem 1 and Theorem 2.
Lyapunov analysis for discrete algorithms is much harder than for continuous dynamical systems. The proposed approach only works for continuous problems and does not help the discrete case.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: There are some questions for technical unclarity.
- Does the paper consider only strongly convex $f(x)$? If it does, then the minimizer $x^*$ is unique, and I suggest referring to $x^*$ as 'the minimizer' rather than 'a minimizer' throughout the paper; if it does not, for convex $f(x)$, how do you guarantee that the minimizer $x^*$ exists? For example, $f(x) = e^{-x}$. How should the result of Theorem 1 be interpreted in this case without defining $x^*$?
- For the Exact PEP formulation in line 105, why is the initial condition set as $\dot X(0) =0$? If $x_0 = x^*$, then there is a zero-division issue, while this was not a problem in formulation (3). I suggest the authors address this case in the Exact PEP formulation.
- The computation stated in line 132 is not obvious. I suggest the authors include the outline in the paper.
- Suggest to outline the proofs of Theorem 1 and Theorem 2 in the paper.
- Can the convergence guarantee in line 155 imply the convergence guarantee of the ODE model? In what sense? For instance, as an extreme case, when $f(x) = \frac{\mu}{2}\|x - x^*\|^2$, $\tilde f(x) - \tilde f(x^*)$ is always zero, and line 155 becomes a trivial inequality.
- Is it possible that the supremum in the inequalities (15) is infinite, making the inequality trivial? For example, when $\mu = 0$ and $f(x)$ is not bounded below.
- What is $x^T$ in equation (20) and (21)?
- Typos: line 45, 'enhances'; page 5, footnote 3 line 2, 'space'.
- Please check the format of the reference list, particularly the letter case (in [25], [27], [30], etc).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments.
> The intuition to consider the PEP kernel ...
We want to clarify that selecting the multiplier functions is not such a challenging task. Although not explicitly mentioned in the paper, there is a rule of thumb for choosing $\Lambda$ (or $\lambda$). When the expected convergence rate is $O(1/a(T))$, we found that setting $\Lambda(t)=a(t)/a(T)$ in Theorem 1 (or $\lambda(t)=a(T)/a(T-t)$ in Theorem 2) leads to the desired results. For instance, in line 165, because the known convergence rate of AGM-SC ODE is $O(1/e^{\sqrt{\mu}T})$, we set $\Lambda(t)=e^{\sqrt{\mu}t}/e^{\sqrt{\mu}T}=e^{\sqrt{\mu}(t-T)}$. Once the appropriate $\Lambda$ (or $\lambda$) is determined, it is not difficult to select the parameter $\nu\in\mathbb{R}$ that makes the PEP kernel positive semidefinite. In the final version of our work, we will include a formal explanation of this rule.
> If given an ODE model as (4), ...
In our analysis, checking whether the given ODE model satisfies the assumptions in Theorem 1 and Theorem 2 is indeed straightforward. Once the multiplier function $\Lambda$ (or $\lambda$) is chosen, the verification process involves only computing the PEP kernel and checking if it is positive semidefinite.
Regarding the soundness of assumptions, Theorem 2 in the initial submission of our paper relied on a nontrivial assumption (16). Fortunately, we have successfully relaxed this assumption, and the revised version will be included in the final version of our paper. In the general response, we provide a detailed explanation of the modification we made.
> Lyapunov analysis for the discrete algorithms ...
While it is true that discrete-time analysis can be more challenging than continuous-time analysis, we disagree that continuous-time analysis does not help discrete-time analysis. In fact, our continuous-time PEP framework can provide guidance for analyzing the discrete PEP. In the discrete PEP, finding an appropriate multiplier vector $\lambda_{i}$ can be quite challenging. However, this task becomes more approachable in continuous-time analysis, where we need to find a multiplier function $\lambda(t) $ for the corresponding continuous PEP. Once we have a multiplier function $\lambda(t)$ that works for the continuous PEP, we can discretize this $\lambda(t)$ to obtain candidates for multiplier vectors $\lambda_{i}$ that work for the discrete PEP (see Appendix G.2.2 for a specific example and related discussion). Furthermore, discretizing a multiplier function $\lambda(t)$ is typically simpler than discretizing a Lyapunov function. As a result, transitioning from continuous-time analysis into discrete-time analysis is more straightforward in the PEP framework than in traditional Lyapunov analysis.
> Does the paper consider only the strongly convex ...
In our paper, we assume the existence of a minimizer $x^*$ (we will make this clearer in the final version). This assumption is standard in the literature on the analysis of accelerated first-order methods and can be found in Nesterov's seminal paper on AGM [15] as well as recent works [22,19] published in NeurIPS.
> For Exact PEP formulation in line 105, ...
The initial condition $\dot{X}(0)=0$ is a part of the continuous-time model (AGM ODE), and its derivation can be found in Su et al.'s paper [23]. Note that without having an initial condition for both $X(0)$ and $\dot{X}(0)$, the solution to AGM ODE is not uniquely determined. In the final version, we will mention the initial condition immediately after introducing AGM ODE for the first time.
> If $x_0=x^*$, then there is zero division issue, ...
While it is desirable to avoid such an issue, we don't need to worry about this case. The case $x_0=x^*$ leads to $X(t)=x^*$ for all $t$, making the situation so trivial that we can safely exclude it from our analysis. Also, note that dividing by $\|x_0-x^*\|$ is common in the PEP literature [2,26].
> Can the convergence guarentee in line 155 ...
The convergence guarantee in line 155 is for $\tilde{f}(X(T))-\tilde{f}(x^*)=f(X(T))-f(x^*)-\frac{\mu}{2}\|X(T)-x^*\|^{2}$, rather than for $f(X(T))-f(x^*)$. While it is possible to formulate a continuous PEP with the performance measure as $f(X(T))-f(x^*)$ (as done in the discrete-time case in [26, Appendix D]), we decided to use $\tilde{f}(X(T))-\tilde{f}(x^*)$ as the performance measure because it makes the construction of the PEP kernel more natural. Another compelling reason is its relevance to the well-known convergence rates for the Triple Momentum Method (TMM) and the Information-Theoretic Exact Method (ITEM), whose ODE models we aim to analyze in our paper. The known Lyapunov functions for these methods (see [26, Section 2]) yield convergence rates for the quantity $f(x_{N})-f(x^*)-\frac{\mu}{2(1-\mu/L)}\|x_{N}-x^*\|^{2}$, not for $f(x_{N})-f(x^*)$. The continuous-time counterpart of this quantity is precisely $\tilde{f}(X(T))-\tilde{f}(x^*)$.
> Is it possible that the supremum ...
In the literature on minimizing gradient norm of convex functions following the work [8], it is standard to assume the initial function condition (IFC), that is, $f(x_0)-f(x^*)$ is bounded by some constant. In our work, we assume that the supremum in the inequality (15) is finite, and this assumption can be considered as a natural extension of (IFC) to the strongly convex setting.
> What is $x^{T}$ in equation (20) and (21)?
$x^{T}:=X(T)$. We will clarify this in the final version.
> Typos: line 45, 'enhances'; page 5, footnote 3 line 2, 'space'.
Thank you for noting these typos. We will fix them in the final version.
The reviewer has also provided valuable suggestions, which will be incorporated into the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your replies to my questions and updates to the manuscript. The primary concerns I had have been adequately addressed and clarified. As a result, I have adjusted my score to 5 accordingly. | Summary: This paper provides a new technique (that is fundamentally different from the Lyapunov approach) for systemically analyzing the convergence rates of ODE models for first-order optimization methods, which reduces to verifying the positive semidefiniteness of specific Hilbert-Schmidt integral operators. This is a continuous time version of the Drori and Teboulle's (discrete) performance estimation problem (PEP) approach, which has recently become a fundamental tool in systemically analyzing the convergence rates of first-order methods. As a verification, the authors utilized the proposed tool for analyzing various ODE models of accelerated first-order methods such as (unified) AGM, TMM, ITEM, AGM-G and OGM-G, which either recovered the existing convergence rates or revealed new rates for various ODE models.
Strengths: - This provides a new systematic analysis of convergence rates of the ODE models of first-order methods, which will be potentially useful for analyzing new ODE models. The authors have successfully verified the effectiveness of the tool for various ODE models.
- This continuous PEP resembles the discrete PEP, but its construction via the functional analysis is certainly not trivial and is very interesting.
- This provides new insights to the PEP analysis.
Weaknesses: - One of the important features of the discrete PEP is that one can numerically find the values of Lagrange multipliers by numerically optimizing them, which can be used to reveal an analytical form of Lagrange multipliers. However, this paper's continuous PEP requires one to explicitly have the appropriate Lagrange multipliers, which can be laborious especially when dealing with new ODE models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Line 132: How about letting the readers know that the derivation can be found in Appendix B?
- Line 180: Is this rate consistent with the rate in [9]?
- Appendix Line 75: $K^1$? $K^d$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and thoughtful comments.
> One of the important features of the discrete PEP is that one can numerically find the values of Lagrange multipliers by numerically optimizing them, which can be used to reveal an analytical form of Lagrange multipliers. However, this paper's continuous PEP requires one to explicitly have the appropriate Lagrange multipliers, which can be laborious especially when dealing with new ODE models.
This concern does not significantly impact our contributions at this point. Our work primarily focuses on establishing a theoretical foundation, rather than dealing with implementation.
Although it is not the main focus of our paper, the implementation of continuous PEP is a possible future direction. It would necessitate implementing an optimization problem in function spaces, which might involve approximating continuous-time functions using a finite set of basis functions, with appropriate care for numerical approximation errors.
Alternatively, one can use the discrete PEP as a guiding framework for the continuous PEP. In this scenario, the Lagrange multipliers can be numerically optimized in the discrete PEP, and one can then take the stepsize limit to guess an explicit form of a working multiplier function for the continuous PEP.
> Line 132: How about letting the readers know that the derivation can be found in Appendix B?
Thank you for the suggestion. We will incorporate this into the final version of our paper.
> Line 180: Is this rate consistent with the rate in [9]?
In [9, Corollary 8], a convergence rate of $f(X(T))-f(x^*)\leq O(\mathrm{csch}^{2}(\frac{\sqrt{\mu}}{2}T))$ is reported (note that the notation $\mathrm{cschc}$ in [9] is defined as $\mathrm{cschc}(t):=t\,\mathrm{csch}(t)$). While our convergence guarantee in Line 180 is not exactly the same as the one in [9] because ours is for $\tilde{f}(X(T))-\tilde{f}(x^*)$, both rates are consistent in the sense that both exhibit the rate $O(\mathrm{csch}^{2}(\frac{\sqrt{\mu}}{2}T))$. We will make this point clear in the final version of our paper.
> Appendix Line 75: $K^{1}$? $K^{d}$?
As $K^{d}$ is already defined in Appendix Line 63, $K^{1}$ is correct here. However, there is a typo in Line 75: $L^{2}([0,T];\mathbb{R}^{d})$ should be corrected to $L^{2}([0,T];\mathbb{R}^{1})$. Thank you for the catch.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have read the rebuttal and comments. I have no further questions and will keep my score. | Rebuttal 1:
Rebuttal: Dear all reviewers,
# Figures
The attached PDF file contains the figures mentioned in the rebuttals: Visualization of PEP kernels, and numerical experiment for the convergence rate of AGM-SC ODE obtained in Section 4.
# Relaxing assumptions in Theorem 2
We have relaxed the assumption (16) in Theorem 2. This modification will not affect other parts of our paper, as the revised version is a generalization of the initial version.
## Theorem statement
Remove the assumption (16) and replace the transformation before line 226 with the following:
$g^{G}(t)=\lambda(t)g(t)-\int_{0}^{t}\dot{\lambda}(\tau)g(\tau)d\tau,$
where $g(t)=\nabla f(X(t))+\mu\int_{0}^{t}\int_{\tau}^{t}h^{G}(s,\tau)ds\nabla f(X(\tau))d\tau$.
(Note: we have shown that this transformation is equivalent to that in the initial submission.)
## Proof
We make the following modifications on Lines 103--117.
Remove Lines 103--108, 110.
Let $\bar{f}(x)=f(x)-\frac{\mu}{2}\|x-x_{0}\|^{2}$. Then, $\bar{f}$ is convex.
Change the definition of $\tilde{f}_{t}(y)$ as follows:
$ \tilde{f}_t(y)=\lambda(t)(\bar{f}(y)-\bar{f}(X(T)))$
$\qquad \qquad -\langle \int_0^t\dot{\lambda}(\tau)\nabla\bar{f}(X(\tau))d\tau,y-X(T)\rangle . $
Change the formula after Line 112 as follows:
$ \frac{\partial}{\partial t}\tilde{f}\_{t}(y)|_{y=X(t)} =\dot{\lambda}(T)\left(\bar{f}(X(t))-\bar{f}(X(T))-\left\langle \nabla\bar{f}(X(t)),X(t)-X(T)\right\rangle \right). $
Change the definition of $N(t)$ as follows:
$N(t)=\frac{1}{M}\left(\bar{f}(y)-\bar{f}(X(T))-\left\langle \nabla\bar{f}(X(t)),y-X(T)\right\rangle \right).$
Pdf: /pdf/069a64f248caab96e71fac3d94e9a97699bf40b0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: A continuous counterpart of the work by Drori and Teboulle [2] is presented. Specifically, through the dual objective of the relaxed PEP in continuous time, the convergence rates of various ODEs are obtained. The analysis of the Lagrangian dual leads to a dual solution based on a symmetric kernel of a Hilbert-Schmidt integral operator (Theorem 1). Some of the follow-up results are new to the literature, e.g., low-resolution ODEs (as opposed to the high-resolution ODEs proposed by Shi et al. [19]) for the TM and ITEM methods, and velocity/gradient norm convergence rates (Theorem 2).
Strengths: 1- The work has a solid mathematical base. The proofs are well written and mostly easy to follow.
2- The rate for TM ODE and ITEM ODE matching their discretized algorithms.
3- Investigating the connection between the continuous-time PEP and its discrete counterpart in Appendix G.2
Weaknesses: 1- The future directions are not clearly specified, or are very vague. This will impact the contribution of the work to the NeurIPS community. In the "Questions" section, I have asked some questions (numbers 2, 3, 4, 5) which could serve as future directions or even increase the contribution of this work beyond its current status.
2- The presentation can improve e.g.
-In (Relaxed PEP): move "subject to" under "max" for better readability.
-In line 145: "Sinse" is a typo.
-References 22 and 23 are the same.
-Appendix: line 455, missing parenthesis for (70).
3- Limited literature review. A more comprehensive literature review could be placed in the Appendix to save space.
4- Inconsistency in some parts of the text; e.g., in line 113 I expected to see the inequalities mentioned in line 112, but was instead presented with another representation of the AGM ODE from [9]. Also, sudden shifts from convex cases to strongly convex ones can cause confusion.
5- Some parts could be moved to the appendix (e.g., recovering the famous known rates for ODEs), with their results instead reported in a table. The authors could then use the additional space to draw conclusions, compare (e.g., graphs and visual simulations for instant comparison), or even explore other aspects of the proposed framework or ODEs (by, e.g., answering some of the questions in the "Questions" section below).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1- Is the statement in lines 33 [However ...] 36 true? There has been a line of work that uses positive semidefiniteness to define tight Lyapunov functions (see [1]). This work, however, deals with a more general case, which entails various ODEs through the choice of the kernel.
2- Following Q1, is it possible to use your framework to define tight Lyapunov functions? This relates to finding the analytical Lyapunov functions simulated in e.g. [2].
3- Is it possible to extend this work to the high-resolution ODE framework proposed by Shi to bypass the Lyapunov based proofs in their analysis?
4- Is there any connection between the low-resolution ODE for TMM proposed in this work and the high-resolution ODE proposed in [3]?
5- In authors' mind, what discretizations can recover the TMM and ITEM from the proposed TMM and ITEM ODEs?
References
[1] Sanz-Serna, Jesús María and Konstantinos C. Zygalakis. “The connections between Lyapunov functions for some optimization algorithms and differential equations.” SIAM J. Numer. Anal. 59 (2020): 1542-1565.
[2] Upadhyaya, M., Banert, S., Taylor, A.B., & Giselsson, P. (2023). Automated tight Lyapunov analysis for first-order methods.
[3] B. Sun, J. George and S. Kia, "High-Resolution Modeling of the Fastest First-Order Optimization Method for Strongly Convex Functions," 2020 59th IEEE Conference on Decision and Control (CDC), Jeju, Korea (South), 2020, pp. 4237-4242, doi: 10.1109/CDC42340.2020.9304444.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are stated in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q2.** In Summary: Our continuous PEP is intrinsically associated with certain Lyapunov functions. However, if you are asking about the conventional Lyapunov function argument, where the Lyapunov function takes specific forms like
$\mathcal{E}(t)=a(t)(f(X(t))-f(x^*))+b(t)\Vert Z(t)-x^{*}\Vert^{2},$
then the answer is no.
Let's see how our framework can be interpreted using a Lyapunov function. The continuous PEP presented in Sections 3.1 and 3.2 is related to the following Lyapunov function:
$\mathcal{E}(t):=\nu\Vert x_0-x^*\Vert^2+\int_{0}^{t}\lambda_{1}(s)\left(\dot{\varphi}(s)+\left\langle \gamma(s),\int_{0}^{s}H(s,\tau)\gamma(\tau)\,d\tau\right\rangle \right)ds\Vert x_0-x^*\Vert^{2}$
$\qquad\quad +\int_{0}^{t}\lambda_{2}(s)\left(\varphi(s)+\left\langle \gamma(s),v+\int_{0}^{s}\int_{\tau}^{s}H(\sigma,\tau)\,d\sigma\gamma(\tau)\,d\tau\right\rangle \right)ds\Vert x_{0}-x^{*}\Vert^{2},$
which is decreasing by its construction. When the multiplier functions form a feasible solution to (7), i.e., $\lambda_{1}(0)=0$, $\lambda_1(T)=1$, and $\dot{\lambda}_1(t)=\lambda_{2}(t)$, we have $\mathcal{E}(T)=f(X(T))-f(x^{*})+Q(T)$, where
$\mathcal{Q}(T)=\left(\frac{1}{2}\langle K^{d}\gamma,\gamma\rangle+\langle\lambda_{2}(t)v,\gamma(t)\rangle+\nu\right)\|x_{0}-x^{*}\|^{2}.$
If the PEP kernel (8) is positive semidefinite, we can show $Q(T)\geq0$, following the argument in Appendix Lines 73–78. As a result, we obtain
$f(X(T))-f(x^*)=\mathcal{E}(T)-Q(T)\leq\mathcal{E}(T)\leq\mathcal{E}(0)=\nu\Vert x_0-x^*\Vert^{2}.$
In the conventional Lyapunov function argument, we have an expression for $Q(T)$ that is automatically nonnegative (for example, as being a sum of squared norms). In contrast, the Lyapunov argument corresponding to PEP requires showing $Q(T)\geq0$ by using the positive semidefiniteness of the PEP kernel.
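The PSD-kernel argument above can be illustrated numerically: once a symmetric kernel is discretized into a positive semidefinite Gram matrix, every quadratic form in $\gamma$ is nonnegative, which is the discrete analogue of concluding $Q(T)\geq 0$. The Gaussian kernel below is only a stand-in example, not the paper's PEP kernel.

```python
import numpy as np

# Discretize a symmetric kernel S(t, u) on a time grid into a Gram matrix and
# probe the quadratic form <gamma, S gamma> with random gamma.  If the matrix
# is PSD, the minimum observed value stays (numerically) nonnegative.
def min_quadratic_form(kernel, grid, n_trials=200, seed=0):
    S = np.array([[kernel(t, u) for u in grid] for t in grid])
    S = 0.5 * (S + S.T)                       # clean up numerical asymmetry
    rng = np.random.default_rng(seed)
    gammas = rng.standard_normal((n_trials, len(grid)))
    return min(float(g @ S @ g) for g in gammas)

grid = np.linspace(0.0, 1.0, 50)
# The Gaussian kernel exp(-(t-u)^2) is a standard example of a PSD kernel.
worst = min_quadratic_form(lambda t, u: np.exp(-(t - u) ** 2), grid)
```

For a kernel that is not positive semidefinite, `worst` would come out strictly negative for some draw of `gamma`.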
**Q1.** As noted by the reviewer, there exists a line of work for systematically finding (conventional) Lyapunov functions. While we have cited some of these works in the related work section, we agree that mentioning this research direction around line 33 is appropriate. We will incorporate this in the final version of our paper.
Nevertheless, the statement in lines 33–36 remains valid. Our framework circumvents the need for designing Lyapunov functions, resulting in a distinctive and more widely applicable way to establish convergence guarantees. In our framework, we can address all possible convergence rates obtained through a weighted integral of (5), with each corresponding to a dual feasible solution to (7). This capability enables us to show the optimality of convergence rates, as exemplified in Remark 1. Such a claim cannot be made straightforwardly in works focused on Lyapunov functions.
**Q5.** TMM and ITEM can be expressed as the following fixed-step first-order method:
$x_{i+1}=x_{i}-\frac{1}{L}\sum_{j=0}^{i}h_{ij}\nabla f(x_{j}).$
The continuous-time counterpart of this form is the following dynamical system (4) (see [9, Section 2.4.2]):
$\dot{X}(t)=-\int_{0}^{t}H(t,\tau)\nabla f(X(\tau))\,d\tau,$
where $H(i\sqrt{s},j\sqrt{s})\approx h_{ij}$. In light of this, the discretization process can be understood as discretizing a kernel $H(t,\tau)$ into a matrix $[h_{ij}]$.
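The correspondence $h_{ij}\approx H(i\sqrt{s}, j\sqrt{s})$ can be sketched in a few lines: sampling a continuous-time kernel on the grid $t_i = i\sqrt{s}$ recovers a step-size matrix $[h_{ij}]$. The TMM kernel used below, $H(t,\tau)=2e^{3\sqrt{\mu}(\tau-t)}$, is the one given in the authors' later reply via Proposition 4; the values of $\mu$ and $s$ are illustrative.

```python
import numpy as np

# Sketch of h_ij ~ H(i*sqrt(s), j*sqrt(s)): sample a continuous kernel on the
# time grid t_i = i*sqrt(s) to obtain the step-size matrix of a fixed-step
# first-order method.
def discretize_kernel(H, n_steps, s):
    h = np.zeros((n_steps, n_steps))
    for i in range(n_steps):
        for j in range(i + 1):        # only tau <= t contributes in (4)
            h[i, j] = H(i * np.sqrt(s), j * np.sqrt(s))
    return h

mu, s = 0.1, 0.01                     # illustrative values
H_tmm = lambda t, tau: 2.0 * np.exp(3.0 * np.sqrt(mu) * (tau - t))
h = discretize_kernel(H_tmm, n_steps=5, s=s)
```

The resulting matrix is lower triangular, with entries decaying away from the diagonal, mirroring the exponential forgetting in the kernel.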
**Q3.** It seems that such an extension might not be straightforward. The low-resolution ODEs take on the form $\ddot{X}(t)+b(t)\dot{X}(t)+c(t)\nabla f(X(t))=0$, which can be rewritten as (4) by Proposition 4 in the appendix. However, the high-resolution ODEs for Nesterov's AGM involve a gradient correction term $\nabla^{2}f(X)\dot{X}$, which cannot be derived from (4). To extend our continuous PEP framework to the high-resolution ODE framework, one should first find a reasonable generalization of the dynamics (4) that can handle the term $\nabla^{2}f(X)\dot{X}$, while enabling the convergence analysis via positive semidefinite kernels. This task is not trivial. Therefore, we defer it to future work.
**Q4.** The high-resolution TMM ODE in [3] does not align with the low-resolution TMM ODE in our paper. This is due to the different choices of time step size. While we adopted a time stepsize of $1/\sqrt{L}$, [3] employed a time stepsize of $\sqrt{\alpha}$, where $\alpha=\frac{2-1/\sqrt{L/\mu}}{L}$ (the value of $\alpha$ depends on the specific algorithm). Although both choices make sense, our choice of stepsize $1/\sqrt{L}$ is more commonly employed in the literature and can be applied to any fixed-step first-order method $x_{i+1}=x_{i}-\frac{1}{L}\sum_{j=0}^{i}h_{ij}\nabla f(x_{j})$.
**W5.** Thank you for providing a valuable suggestion. In the final version of our paper, we plan to include figures that visually illustrate PEP kernels (see Figures 1 and 2 in the attached pdf), along with detailed explanations. Additionally, we will enhance the conclusion section to offer a more comprehensive discussion of our framework and its potential applications in future research.
**W1.** We will incorporate the reviewer's suggestions, as well as other possible future directions, e.g., obtaining optimal ODE model with the use of PEP.
**W2.** Thank you for your suggestions; we will address these issues.
**W3.** In our initial paper submission, we focused on including key papers related to our work, considering space limitations. Following your suggestion, we plan to utilize the appendix to provide a more comprehensive literature review. For instance, our literature review will cover papers related to PEP for finding tight Lyapunov functions and the generalization of PEP beyond the convex optimization setup.
**W4.** Thank you for your suggestions. To improve readability, around line 113, we will first express equation (5) in a more familiar form:
$0=\frac{d}{dt}( f(X(t))-f(x^*)) -\left\langle \nabla f(X(t)),\dot{X}(t)\right\rangle $
$0\geq f(X(t))-f(x^*)-\left\langle \nabla f(X(t)),x^*-X(t)\right\rangle .$
Also, before presenting Theorem 1, we will clearly mention that we consider the strongly convex case.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I would like to thank the authors for their responses to my concerns and questions. I would like to add that, regarding the Q2 response, it would be interesting to see if positive semidefiniteness of (8) leads to structural constraints on $Q(T)$. This might be an easy task to check for special cases like Nesterov's. Also, on the authors' response to Q5, I agree with their response and this is the systematic way of discretising any first-order method. However, in my understanding (and please correct me if I am wrong), the ODEs which were proposed for most of the methods in the paper (like the proposed TM method's ODE) are second-order ODEs and do not fit the class of ODE
$$\dot X(t)=-\int_0^{t}H(t,\tau)\nabla f(X(\tau))d\tau.$$
My question was mainly regarding discretisers like explicit Euler with updated gradient calculation or semi-implicit Euler (SIE). For example in [1] it was shown that the NAG is the SIE discretization of a high-resolution ODE, but as far as I know, this is not the case for low-resolution ODEs (like yours) and more complicated discretizers are needed for these ODEs like rate-matching in [2]. Do you think one can find a discretiser like rate-matching to recover TM method from the low-resolution ODE you proposed?
**References**
[1] B. Shi, S. S. Du, W. J. Su, and M. I. Jordan. Acceleration via symplectic discretization of high- resolution differential equations (2019)
[2] A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization (2016)
---
Reply to Comment 1.1.1:
Comment: Thank you for your engagement in the discussion. We answer the questions and comments below.
> However, in my understanding (and please correct me if I am wrong) the ODEs which were proposed for most of the methods in the paper (like the proposed TM method's ODE) are second-order ODEs and do not fit the class of ODE (4): $\dot{X}(t)=-\int_0^t H(t,\tau)\nabla f(X(\tau))d\tau$.
We would like to clarify that all second-order ODEs indeed fall within the class of continuous-time dynamical systems of the form (4). In the appendix of our paper, Proposition 4 shows that a second-order ODE $\ddot{X}(t)+b(t)\dot{X}(t)+c(t)\nabla f(X(t))=0$ can be equivalently expressed as the integro-differential equation (4) with $H(t,\tau)=c(\tau)e^{-\int_{\tau}^{t}b(s)ds}$. For instance, the TM method (TMM)'s ODE $\ddot{X}+3\sqrt{\mu}\dot{X}+2\nabla f(X)=0$ can be transformed into the form (4), using $H(t,\tau)=2e^{-\int_{\tau}^{t}3\sqrt{\mu}ds}=2e^{3\sqrt{\mu}(\tau-t)}$.
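This equivalence can be sanity-checked numerically under a hypothetical setup with $\mu=1$ and $f(x)=x^2/2$: simulate the second-order TMM ODE and verify that its trajectory also satisfies the integro-differential form with the stated kernel, up to discretization error.

```python
import numpy as np

# Simulate X'' + 3*sqrt(mu)*X' + 2*f'(X) = 0 with f'(x) = x, then check that
# X'(t) = -int_0^t H(t,tau) f'(X(tau)) dtau with H(t,tau)=2*exp(3*sqrt(mu)*(tau-t)).
mu, dt, n = 1.0, 1e-3, 2000
b = 3.0 * np.sqrt(mu)
x, v = 1.0, 0.0                     # X(0) = 1; X'(0) = 0 as required by (4)
xs, vs = [x], [v]
for _ in range(n):
    v += dt * (-b * v - 2.0 * x)    # semi-implicit Euler on the 2nd-order ODE
    x += dt * v
    xs.append(x)
    vs.append(v)
xs, vs = np.array(xs), np.array(vs)

t = n * dt
taus = np.arange(n + 1) * dt
H = 2.0 * np.exp(b * (taus - t))
# trapezoid rule for int_0^t H(t,tau) * X(tau) dtau
integral = dt * (np.sum(H * xs) - 0.5 * (H[0] * xs[0] + H[-1] * xs[-1]))
residual = abs(vs[-1] + integral)   # vanishes up to O(dt) discretization error
```

The residual shrinks with the step size, consistent with the two formulations describing the same dynamics.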
> I would like to add that regarding Q2 response it would be interesting to see if positive semi definiteness of (8) leads to structural constraints $Q(T)$.
In our understanding, the answer is negative. It is a nontrivial task to obtain a Lyapunov function from the PEP kernel. In the conventional Lyapunov argument, we often have $\dot{\mathcal{E}}(t)<0$. Consequently, the expression of $\dot{\mathcal{E}}(t)$ as a quadratic functional of $\tau\mapsto\gamma(\tau)=\nabla f(X(\tau))$, i.e., the integral operator $K(t)$ for which $\dot{\mathcal{E}}(t)=-\langle\gamma,K(t)\gamma\rangle$, is involved in the PEP kernel. In fact, the PEP kernel $S$ can be obtained by integrating the kernel $K(t)$ over time and then adding the kernel associated with $Q(T)$, i.e., we have $S=\int K(t)dt+Q(T)$. However, by only knowing the PEP kernel $S$, one cannot determine the kernels $K(t)$ and $Q(T)$.
It is worth noting that the converse of your statement is true. In order to show $S\succeq0$, one can first show $K(t)\succeq0$ and $Q(T)\succeq0$, which directly follows from the structure of $\dot{\mathcal{E}}(t)$ (for example, a squared distance), and then use the fact that a weighted integral of positive semidefinite kernels is positive semidefinite.
> My question was mainly regarding discretisers like explicit Euler with updated gradient calculation or semi-implicit Euler (SIE). For example in [1] it was shown that the NAG is the SIE discretization of a high-resolution ODE, but as far as I know, this is not the case for low-resolution ODEs (like yours) and more complicated discretizers are needed for these ODEs like rate-matching in [2]. Do you think one can find a discretiser like rate-matching to recover TM method from the low-resolution ODE you proposed?
While this question seems tangential to our contributions, we can provide an answer.
Shi et al. (2019) showed that AGM-SC differs from the semi-implicit Euler scheme applied to the high-resolution AGM-SC ODE by only a factor of $\frac{1}{1-\sqrt{\mu s}}$. As the reviewer correctly noted, applying this discretization technique to low-resolution ODEs does not yield Nesterov's AGM. The reason is that the low-resolution ODEs do not capture the gradient descent step $x_{k}=y_{k-1}-s\nabla f(y_{k-1})$ in AGM. Thus, when discretizing low-resolution ODEs, it is essential to incorporate the gradient step into the naive discretization scheme.
The derivation of TMM ODE from TMM is provided in Appendix D.1. Reversing this procedure gives the methodology for discretizing TMM ODE. Applying the explicit Euler method to $\dot{Z}=\sqrt{\mu}(Y-Z-\frac{1}{\mu}\nabla f(Y))$ gives $z_{k+1}-z_{k}=\sqrt{\mu s}(y_{k}-z_{k}-\frac{1}{\mu}\nabla f(y_{k}))$. Applying the implicit Euler method to $\dot{Y}=2\sqrt{\mu}(Z-Y)$ gives $y_{k}-x_{k}=2\sqrt{\mu s}(z_{k}-x_{k})$, where $y_{k-1}$ is replaced by $x_{k}$. By incorporating the gradient step $x_{k}=y_{k-1}-s\nabla f(y_{k-1})$ and adjusting the coefficient $2\sqrt{\mu s}$ to $\frac{2\sqrt{\mu s}}{1+\sqrt{\mu s}}$, we recover TMM.
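A minimal sketch of the discretization just described, run on an illustrative quadratic $f(x)=\tfrac12 x^\top A x$ with $s=1/L$; whether this exactly reproduces TMM's standard parameterization is not checked here, only that the resulting scheme converges.

```python
import numpy as np

# Updates from the rebuttal:
#   x_k     = y_{k-1} - s * grad f(y_{k-1})
#   y_k     = x_k + (2*sqrt(mu*s) / (1 + sqrt(mu*s))) * (z_k - x_k)
#   z_{k+1} = z_k + sqrt(mu*s) * (y_k - z_k - grad f(y_k) / mu)
mu, L = 1.0, 10.0                     # illustrative strong convexity / smoothness
A = np.diag([mu, L])
grad = lambda y: A @ y                # grad of f(x) = 0.5 * x^T A x

s = 1.0 / L
r = np.sqrt(mu * s)
c = 2.0 * r / (1.0 + r)               # adjusted coefficient 2*sqrt(mu*s)/(1+sqrt(mu*s))

y = np.array([1.0, 1.0])
z = y.copy()
for _ in range(300):
    x = y - s * grad(y)               # incorporated gradient step
    y_new = x + c * (z - x)           # implicit Euler on Y-dynamics
    z = z + r * (y_new - z - grad(y_new) / mu)  # explicit Euler on Z-dynamics
    y = y_new
```

On this quadratic, the iterates contract linearly toward the minimizer at the origin.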
---
Please feel free to ask for further clarifications. | null | null | null | null | null | null |
Learning Regularized Monotone Graphon Mean-Field Games | Accept (poster) | Summary: The paper focuses on two fundamental problems in regularized Graphon Mean-Field Games (GMFGs). The first problem is to establish the existence of a Nash Equilibrium (NE) of any $\lambda$-regularized GMFG (for $\lambda \geq 0$). The second problem is to propose provably efficient algorithms to learn the NE in weakly monotone GMFGs. Regarding the first problem, this paper used weaker conditions than previous works analyzing unregularized GMFGs ($\lambda = 0$) or $\lambda$-regularized MFGs, which are special cases of $\lambda$-regularized GMFGs. To address the second problem, the paper proposes a discrete-time algorithm and derives its convergence rate solely under weakly monotone conditions. Furthermore, the paper develops and analyzes the action-value function estimation procedure during the online learning process, which is absent from algorithms for monotone GMFGs. The efficiency of the designed algorithm is corroborated by empirical evaluations.
Strengths: 1. It's quite fascinating to uncover the link between MFG and GMFG as explored in the paper. Notably, the use of the $\lambda$-regularized MFG in the proof of Theorem 1 to find the NE of the regularized GMFG provides an intuitive understanding.
2. PMD for the general function approximation is impressive. The decision to employ policy mirror descent adds an interesting dimension to the methodology.
The paper appears to be well-rounded and articulately composed. It has a clear presentation of its findings.
Weaknesses: The paper does have quite a few notations, but it's understandable considering the complexity of GMFG. I've got a few questions, which might also point out some areas in the paper that could be improved. I've put these questions and potential weaknesses together in the question section for easy reference.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I mainly work on Stochastic game theory, not mean field game, so some of my questions might be quite basic.
1. From the first strength, I'm curious about how an NE of the $\lambda$-regularized GMFG is built from an NE of the constructed $\lambda$-regularized MFG. What makes this possible? This is really interesting to me. Is this a common method for MFG and GMFG? Also, is step 3 in the proof of Theorem 1 something new compared to the previous literature?
2. In stochastic game theory, I know that $\lambda$-regularizing usually gives a better landscape for optimization, making it easier to find (and define) NE in the regularized setup. Is it the same in the MFG setting?
3. Is the analysis of [1] linked to this paper? It'd be helpful to know the technical differences between [1] and this paper.
[1] Zhan, Wenhao, et al. "Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence." SIAM Journal on Optimization 33.2 (2023): 1061-1091.
If this is well-addressed, I am down to re-evaluate this paper.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitation and potential negative societal impact on their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We address the major concerns in the following.
**Relationship between NEs of $\lambda$-regularized GMFGs and MFGs**
To build an NE of the $\lambda$-regularized GMFG from an NE of the constructed $\lambda$-regularized MFG, we take the position of the agent as a state in the MFG. This method is partially covered by [1], but we are the first to provide a mathematically rigorous proof of it.
**Novelty of Step 3 in the Proof of Theorem 1**
Yes, the work [1] only considers the unregularized case in Step 3. In contrast, we cope with regularized MFGs and prove Theorem 2. This requires us to prove that the corresponding regularized game operator is closed, and we achieve this by constructing a set $A_{h}^{(n)}$ in Lines 721 and 722 and proving Proposition 8.
**Regularization Explanation**
Regularization is a standard technique in game theory. In GMFGs, it helps us to uniquely define the optimal policy of the MDP induced by any distribution flow. The regularization also makes sure that the optimal policy is not a degenerate distribution, and thus makes KL divergence an appropriate performance metric for us to analyze.
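A one-step illustration of this point (generic example, not the paper's GMFG setup): the $\lambda$-entropy-regularized best response to an action-value vector $Q$ is the softmax policy $\pi(a)\propto\exp(Q(a)/\lambda)$, which is unique and has full support even when the unregularized argmax is degenerate and non-unique.

```python
import numpy as np

# Entropy-regularized best response  argmax_pi <pi, Q> + lam * H(pi)
# for a single decision: the closed form is the softmax of Q / lam.
def soft_best_response(Q, lam):
    logits = Q / lam
    logits = logits - logits.max()    # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

Q = np.array([1.0, 1.0, 0.0])         # two tied best actions: the unregularized
pi = soft_best_response(Q, lam=0.5)   # argmax is non-unique, the softmax is not
```

The resulting policy places equal positive mass on the tied actions and strictly positive mass everywhere, which is what makes the KL divergence a well-defined performance metric.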
**Comparison With Zhan et al. 2023**
We would like to highlight that we consider a different problem from [2]. They analyze the performance of policy mirror descent in a fixed MDP. In contrast, we analyze policy mirror descent in a non-stationary MDP, where the non-stationarity originates from the reward functions changing according to the distribution flows induced by the policies.
[1] K. Cui, and H. Koeppl. Learning graphon mean field games and approximate Nash equilibria. In International Conference on Learning Representations (2022).
[2] Zhan, Wenhao, et al. Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence. SIAM Journal on Optimization 33.2 (2023): 1061-1091
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you very much! | Summary: The paper analyzes policy mirror descent for solving regularized GMFG. The results provide new guarantees for learning GMFG without stringent oracle assumptions, and unlike some past works, it does not restrict the results to continuous time analysis. Furthermore, the paper provides an analysis of the case of function approximation. Experimental results are also presented for certain GMFG.
Strengths: The paper provides convincing theoretical guarantees for the proposed algorithm. Table 1 is in general convincing of the theoretical contributions of the work. The analysed setting is novel and removes theoretical oracle assumptions in past work. Furthermore, the analysis is in discrete time (i.e., purely algorithmic) and not in continuous time dynamics.
The theoretical results are very clearly presented, and the assumptions are explicit. There is no ambiguity, and the proofs seem correct (although I might have missed details).
Furthermore, the incorporation of a function class in the analysis as opposed to oracle access is new in MFG/GMFG to the best of my knowledge.
Weaknesses: While the provided result for offline RL-based approximation of value functions is interesting theoretically, it might be prohibitive in practice as the results stand.
Table 1 seems to indicate a large variety of assumptions employed in the literature. It is not directly clear how the setting compares to alternative settings; for instance, a comparison of weak monotonicity with other definitions of monotonicity, as well as with contraction, would be useful if they are related. This makes it difficult to compare the theory.
Experimental results are restricted to toy problems; however, it is possible that no alternative benchmarks exist for GMFG.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How does the function class complexity $\mathcal{N}_\infty$ scale in general with various schemes of approximation? Several examples could illuminate the reader. Similarly, making the definition of the covering number in the main body explicit could help reading easier.
Line 230: The statement regarding switching policies is not clear to me. In general, is there an intuitive explanation of the weak monotonicity assumption?
How do the results compare to the specific case of MFG? The graphon structure seems to be more general than MFG, admitting it as a special GMFG with a particular graphon. For a clear comparison, it could be interesting to state the implied bounds (if any) for the special case of monotone MFG with the corresponding implication made clear. Otherwise a direct comparison might be difficult with continuous time results in MFG.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Potential additional limitations to be discussed were mentioned in the weaknesses and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We address the major concerns in the following.
**Empirical Action-Value Function Estimation and Simulation Benchmark**
Our work focuses on the optimization complexity and the sample complexity of the algorithms. The efficacy of our proposed algorithms, including the action-value function estimation, is corroborated in the simulation results. We note that there is no benchmark for GMFGs, and we follow the experiment settings in the previous works [1,2].
We note that Line 7 in Algorithm 2 may incur a potentially high computational cost for the action-value function estimation. However, this step has closed-form expressions in the tabular case, which covers a wide range of applications [3]. For more complex environments, this step can be implemented via gradient descent. We leave the application of our algorithms, especially the action-value function estimation, to such environments as future work.
**Comparison of Different Monotone Conditions and Contraction**
We first provide a comparison between the different monotone conditions. Our work, [1], and [4] all consider the monotone condition for multi-population/graphon mean-field games. [1] defines the inequality in our Proposition 2 as the monotone condition. [4] defines the monotone condition for multi-population MFGs, and the monotone condition in our work recovers their definition by setting the graphons in Definition 3 to block-wise constant graphons. [2] and [5] consider the monotone condition for MFGs. The definition of the monotone condition in [5] is a special case of the definition in [4], obtained by setting the number of populations to one. The definition in [2] is a special case of [5], where the reward functions are the sum of a distribution flow-independent part and a distribution flow-dependent part.
The literature of MFGs, a subclass of GMFGs, is split into two threads: the contraction condition and the monotone condition. We note that the comparison between the monotonicity and the contraction is open even for MFGs.
**Example of Covering Numbers**
We give some examples of the covering number of the function class. Consider a one-dimensional parametric function class $\mathcal{F} _{\mathrm{exp}}=\lbrace f _{\theta}:[0,1]\rightarrow\mathbb{R}\,|\,\theta\in[0,1]\rbrace$, where $f _{\theta}(x)= 1-\exp(\theta x)$. The covering number satisfies $\log \mathcal{N} _{\infty}(\delta,\mathcal{F} _{\mathrm{exp}})\asymp \log(1/\delta)$ as $\delta\rightarrow 0$ [6]. Consider a non-parametric function class $\mathcal{F} _{L}=\lbrace g:[0,1]\rightarrow\mathbb{R}\,|\, g(0)=0, |g(x)-g(y)|\leq L|x-y| \text{ for all }x,y\in[0,1]\rbrace$. The covering number satisfies $\log \mathcal{N} _{\infty}(\delta,\mathcal{F} _{L})\asymp L/\delta$ as $\delta\rightarrow 0$ [6].
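The first example can be probed numerically with a rough greedy cover of $\mathcal{F}_{\mathrm{exp}}$ in the sup-norm (grid sizes and tolerances below are illustrative; this only shows that the cover size grows as $\delta$ shrinks, not the exact rate).

```python
import numpy as np

# Greedy sup-norm cover of F_exp = { x -> 1 - exp(theta*x) : theta in [0,1] }
# on a fine theta-grid: repeatedly pick the first uncovered function and mark
# everything within sup-distance delta of it as covered.
def greedy_cover_size(delta, n_theta=2001, n_x=201):
    thetas = np.linspace(0.0, 1.0, n_theta)
    xs = np.linspace(0.0, 1.0, n_x)
    F = 1.0 - np.exp(np.outer(thetas, xs))    # each row is one function f_theta
    uncovered = np.ones(n_theta, dtype=bool)
    centers = 0
    while uncovered.any():
        i = np.argmax(uncovered)              # first still-uncovered function
        dist = np.abs(F - F[i]).max(axis=1)   # sup-norm distances to f_{theta_i}
        uncovered &= dist > delta             # cover the delta-ball around it
        centers += 1
    return centers

sizes = [greedy_cover_size(d) for d in (0.2, 0.1, 0.05)]
```

Halving $\delta$ roughly doubles the (small) cover size here, consistent with the one-parameter class needing only $\log\mathcal{N}_\infty(\delta) \asymp \log(1/\delta)$ bits.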
**Explanation of the Switching Policies Argument**
The statement regarding switching policies can be clarified through Proposition 2. Here we note that Proposition 2 states that for $\pi^{\mathcal{I}}$, $\tilde{\pi}^{\mathcal{I}}$ and the corresponding distribution flows $\mu^{\mathcal{I}}$,$\tilde{\mu}^{\mathcal{I}}$, implementing policies $\tilde{\pi}^{\mathcal{I}}$, $\pi^{\mathcal{I}}$ on the MDPs induced by $\mu^{\mathcal{I}}$, $\tilde{\mu}^{\mathcal{I}}$ will have higher rewards than implementing policies $\tilde{\pi}^{\mathcal{I}}$, $\pi^{\mathcal{I}}$ on the MDPs induced by themselves. Thus, switching policies can get higher rewards on at least one of the MDPs induced by $\mu^{\mathcal{I}}$ and $\tilde{\mu}^{\mathcal{I}}$.
**Comparison with MFG Results**
We highlight that our results for GMFGs directly imply results for MFGs when the graphons are constant graphons. For example, with constant graphons, Theorem 3 implies that the convergence rate of Algorithm 1 for MFGs is $\tilde{O}(T^{-1/2})$ in the sense of the KL divergence. Theorem 3 in [4] shows that, with the additional potential structure, the fictitious play algorithm converges at rate $O(T^{-1/2})$ in the sense of exploitability. We can prove that the performance guarantee in KL divergence implies one in exploitability, and the resultant bound for our algorithm is $\mathrm{Exploit}(\hat{\pi})=\tilde{O}(T^{-1/4})$. Although this rate is slower than the $O(T^{-1/2})$ in [4], we do not require the potential structure. The additional square root arises because we adopt Pinsker's inequality, and it may be removed with a tighter distribution-flow error analysis than the one below.
Here we specify the main proof ideas. For a policy $\pi$ that is close to NE policy $\pi^{\ast}$ in KL, we denote the distribution flow induced by $\pi$ as $\mu$ and denote the optimal policy on the MDP induced by $\mu$ as $\tilde{\pi}^{\ast}$. We aim to prove that the difference between the cumulative rewards of $\tilde{\pi}^{\ast}$ and $\pi$ on $\mu$ is bounded. This can be achieved by noting that: 1. The performance difference lemma can bound this with the Total Variation (TV) between $\pi$ and $\tilde{\pi}^{\ast}$. 2. The TV between $\tilde{\pi}^{\ast}$ and $\pi^{\ast}$ is proportional to the TV between $\mu$ and $\mu^{\ast}$, since they are the optimal policies on the MDPs induced by $\mu$ and $\mu^{\ast}$. 3. $\mu$ and $\mu^{\ast}$ are close since they are induced by $\pi$ and $\pi^{\ast}$.
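The source of the extra square root is Pinsker's inequality, $\mathrm{TV}(p,q)\leq\sqrt{\mathrm{KL}(p\,\|\,q)/2}$, which converts a KL guarantee into a total-variation one. A quick numerical check on random distributions (illustrative, not part of the paper's proof):

```python
import numpy as np

def kl(p, q):
    # KL divergence between discrete distributions with full support
    return float(np.sum(p * np.log(p / q)))

def tv(p, q):
    # total variation distance
    return 0.5 * float(np.abs(p - q).sum())

rng = np.random.default_rng(0)
holds = all(
    tv(p, q) <= np.sqrt(max(kl(p, q), 0.0) / 2.0) + 1e-12
    for p, q in (rng.dirichlet(np.ones(5), size=2) for _ in range(1000))
)
```

Because the TV bound is a square root of the KL bound, a $\tilde{O}(T^{-1/2})$ guarantee in KL becomes $\tilde{O}(T^{-1/4})$ in exploitability.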
[1] C. Fabian, K. Cui, and H. Koeppl. Learning sparse graphon mean field games. Aistats PMLR, 2023.
[2] S. Perrin, et al. Fictitious play for mean field games: Continuous time analysis and applications. Neurips 33 (2020).
[3] A. Aurell, et al. "Finite state graphon games with applications to epidemics." Dynamic Games and Applications 12.1 (2022).
[4] J. Perolat, et al. Scaling up mean field games with online mirror descent. arXiv:2103.00623(2021).
[5] M. Geist, et al. Concave utility reinforcement learning: the mean-field game viewpoint. arXiv:2106.03787(2021).
[6] M. J. Wainwright. High-dimensional statistics: A non-asymptotic viewpoint. Cambridge university press, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for the very detailed rebuttals. All of my questions have been answered, and I will leave my rating as a 7. | Summary: Intuitively, a "graphon mean field game" (GMFG) describes the large-$N$ limit of a game with $N$ players, where the payoff of a player $i$ depends on a weighted average of the states of other players $j\in [N]$. The graphon aspect comes from the fact that players have "identities" given by numbers $U_i\in [0,1]$, and the weights in the averages are of the form $W(U_i,U_j)/N$, where $W$ is a suitable function (if $W$ is constant, we have a regular mean field game MFG). GMFGs are potentially useful in multiagent reinforcement learning whenever agent interactions are not too strong.
The present paper does not deal with the finite-$N$ problem, but rather with its continuous limit. One important point is that it considers regularized versions of GMFGs, with an added penalization term. The main results are as follows.
* Theorem 1 is a new result on the existence of equilibria for (potentially regularized) GMFGs. The main attraction of this result, in comparison with previous work, is that it only makes continuity assumptions on the reward function and transition probabilities (the function $W$ is still assumed Lipschitz). Moreover, the regularization had not been considered previously. Theorem 1 is obtained via a careful reduction to a non-graphon Mean Field Game, for which the authors also prove a new existence result (Theorem 2).
* The paper then considers algorithms for approximating Nash equilibria for GMFGs that satisfy a monotonicity condition. Theorem 3 obtains a result under the existence of an "action function oracle". When that is not available, one can resort to function approximation: this leads to Theorem 4, which works under additional assumptions (and require the regularization).
A small set of experiments suggests that the authors' method performs well in practice, and also that regularization is important to achieve good performance.
Strengths: The paper is original and significant. It is also quite clear. The following two points stand out.
* The existence results work under weak conditions because of the clever use of "soft" techniques.
* The algorithms work under relatively natural conditions.
Weaknesses: * It is not clear to me if the Lipschitz assumption on $W$ is really needed.
* The proof of Theorem 4 is not particularly surprising. (I hesitate to call this a "weakness", but it is true that this part of the analysis is not too surprising.)
* In certain settings the user may be interested in the unregularized GMFG; however, Theorem 4 requires nonzero regularization. The paper does not provide bounds on how close a (near-)NE for the regularized case is to being a NE for the unregularized game.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: These are simply rewordings of my comments on weaknesses.
Q1: Regarding Theorems 1 and 2, it looks like the only place where the Lipschitz assumption on $W$ is used is (D.5). However, it looks like all that is needed is uniform continuity, which (on the compact domain $[0,1]^2$) is equivalent to continuity.
Q2: In certain settings the user may be interested in the unregularized GMFG; however, Theorem 4 requires nonzero regularization. Can one prove bounds on how close a (near-)NE for the regularized case is to being a NE for the unregularized game?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: None were discussed, but I don't think there was any need to do so.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We address the major concerns in the following.
**Lipschitz Continuity of $W$**
We thank the reviewer for this suggestion. For the proof of Theorem 1, the Lipschitz assumption on $W$ is only used to establish (D.5), and this can be proved under the continuity assumption alone. We will modify this in the revised version.
**Novelty of Theorem 4**
The learning error bound in Theorem 4 mainly consists of two terms: the optimization error and the estimation error. We would like to discuss them separately.
First, the optimization error analysis in Theorem 4 is a direct adaptation of Theorem 3. This is the first analysis of a discrete-time algorithm under only the monotone condition, even for Mean-Field Games (MFGs), the subclass of Graphon Mean-Field Games (GMFGs) with constant graphons. Here we use the intuition provided by Proposition 2 to guide the analysis of policy mirror descent in a non-stationary environment.
Second, the estimation error is analyzed under general function approximation, and we provide the error dependency on the number of episodes $K$ and the number of sampled agents $N$. This analysis is important for realistic applications of the algorithm, where the action-value function must be estimated from samples. Our results provide guidance on choosing the parameters $T$, $N$, and $K$ to achieve a learning error of $\varepsilon$.
**Regularization Explanation**
First, we note that the regularized setting is itself important in MFGs, a subclass of GMFGs. In real-world applications, there are usually safety constraints and incentives for exploration during learning, and these requirements are commonly formulated via regularization. The regularized setting is widely studied in MFGs [1,2,3]. Our work follows this thread and considers learning NEs for regularized GMFGs.
Second, we would like to quantify the difference between the NEs of the regularized GMFGs and the unregularized GMFGs. Here, we denote the NE of the $\lambda$-regularized GMFG as $(\pi^{\ast,\mathcal{I}},\mu^{\ast,\mathcal{I}})$ and denote the optimal policy of the unregularized MDP induced by $\mu^{\ast,\mathcal{I}}$ as $\pi^{\mathcal{I}}$. Now we examine how far $(\pi^{\ast,\mathcal{I}},\mu^{\ast,\mathcal{I}})$ is from the NE of the unregularized GMFGs. We note that $\mu^{\ast,\mathcal{I}}$ is induced by the policy $\pi^{\ast,\mathcal{I}}$, and thus they satisfy the distribution consistency condition. For the player rationality condition, Theorem 2 in [4] shows that $V_{1}^{0,\alpha}(s,\pi^{\alpha},\mu^{\ast,\mathcal{I}})\leq V_{1}^{\lambda,\alpha}(s,\pi^{\alpha},\mu^{\ast,\mathcal{I}})+\lambda H\log|\mathcal{A}|$ for all $\alpha\in[0,1]$ and $s\in\mathcal{S}$. Therefore, $(\pi^{\ast,\mathcal{I}},\mu^{\ast,\mathcal{I}})$ satisfies the player rationality condition up to $\lambda H\log|\mathcal{A}|$. We highlight that this gap also appears in the MFG works [1,2]. Our work focuses on learning the regularized GMFGs and leaves closing this gap to future work.
[1] B. Anahtarci, C. D. Kariksiz, and N. Saldi. Q-learning in regularized mean-field games. Dynamic Games and Applications 13.1: 89-117 (2023).
[2] Q. Xie, Z. Yang, Z. Wang, and A. Minca. Learning while playing in mean-field games: Convergence and optimality. In International Conference on Machine Learning (pp. 11436-11447). PMLR (2021).
[3] B. Yardim, S. Cayci, M. Geist, N. He. Policy mirror ascent for efficient and independent learning in mean field games. In International Conference on Machine Learning (pp. 39722-39754). PMLR (2023).
[4] M. Geist, B. Scherrer, and O. Pietquin. A theory of regularized markov decision processes. International Conference on Machine Learning. PMLR, (2019).
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I thank the authors for their rebuttal. I remain favorable to the paper being accepted. | Summary: This paper studies regularized Graphon Mean-Field Games (GMFGs). They make two theoretical improvements over previous works on this topic:
* They prove existence of Nash equilibrium under weaker assumptions (e.g., weaker requirement on the continuity of the game) than previous works.
* For the special case of monotone regularized GMFGs, they give a mirror-descent algorithm that learns the unique Nash equilibrium. Compared to previous works, the novelty here is that the algorithm works for regularized games in discrete time (as opposed to unregularized games and continuous time).
In terms of techniques, their first result follows the proof plan of Cui and Koeppl [2021] that reduces the problem to proving existence of Nash equilibrium for a subclass of GMFGs called MFGs. Their main technical contribution is proving an equilibrium existence result (Theorem 2) for MFGs under weaker assumptions using a different approach than previous works. Their second result essentially adapts the algorithm from Perolat et al. [2021] to their setting.
Strengths: This paper builds upon previous works and makes reasonable improvements. Most interestingly, the condition of their equilibrium existence result (Theorem 2) seems to be significantly weaker than previous works. The paper is well-written. They did a great job introducing the problem and the results and explaining the techniques and the difference from previous works.
Weaknesses:
As someone who is not closely following this line of work, it is hard for me to gauge the significance of the new equilibrium existence result, i.e., whether the weakened assumption is significantly more applicable than the assumptions made in previous works. The paper briefly mentions that the assumptions in previous works are ``overly restrictive for real-world applications'' but does not provide any concrete example.
The algorithm for learning Nash equilibrium in their setting seems to be rather straightforward adaptation (with more careful analysis) from previous work of Perolat et al. [2021].
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you elaborate why the new equilibrium existence results are more applicable than previous works with concrete examples?
What are the concrete benefits of the exploration step (Line 5 of Algorithm 1) in your learning algorithm? Is it necessary for proving your result?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We address the major concerns in the following.
**Detailed Comparison of NE Existence Conditions**
Theorems 1 and 2 in our work, Proposition 3 in [1], and Proposition 3 in [2] all derive the existence of a Nash Equilibrium (NE) in regularized Mean-Field Games (MFGs) or Graphon Mean-Field Games (GMFGs). Our results in Theorems 1 and 2 hold for continuous reward functions, weakly continuous transition kernels, Lipschitz graphons, and any value of the regularization parameter $\lambda\geq 0$. In contrast, Proposition 3 in [1] proves the existence of an NE via contraction, which requires that $K_{H}<1$ therein. From the definitions of $K_{H}$ and $K_{1}$, the condition $K_{H}<1$ requires the Lipschitz constants of the transition kernels and reward functions to be sufficiently small. Proposition 3 in [2] requires the regularization parameter $\lambda$ to be sufficiently large relative to the Lipschitz constants of the game operators. These conditions are restrictive for real-world applications. In addition, our results hold for compact state spaces, while [2] is restricted to finite state spaces.
Theorem 1 in our work, Theorem 1 in [3], and Theorem 1 in [4] all consider the existence of NE in GMFGs. However, [3] and [4] only prove the existence of NE in the unregularized case under the Lipschitz continuity of the transition kernels and reward functions. In contrast, our work provides the results for all $\lambda\geq 0$, including both the regularized and unregularized cases, under the continuity of reward functions and the weak continuity of transition kernels.
**Algorithm Comparison With Perolat et al. [2021]**
We would like to highlight three main differences between our algorithm and the algorithm in [5].
First, our provably efficient algorithm is discrete-time, while the one in [5] is continuous-time. In practice, the optimization can only be implemented in discrete time, which introduces an additional quantization error for the algorithm in [5]. In contrast, our discrete-time algorithm does not suffer from this error.
Second, our optimization algorithm does not assume access to the exact action-value functions, while [5] analyzes the algorithm only with the true action-value functions. Handling action-value function estimates requires a perturbation analysis of the algorithm, which is important for realistic applications.
Finally, our algorithm has an additional exploration procedure not included in [5]. Intuitively, this procedure guarantees that the support of the NE policy is contained in that of the policy estimate at each step, which accommodates the potential error originating from action-value function estimation and quantization. Technically, this guarantees that the KL divergence between the NE policy and the policy estimate in each iteration is finite, as shown in (F.1) in the Appendix.
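To illustrate why the exploration step matters, here is a small self-contained sketch (not the paper's algorithm; the distributions below are made up): if the policy estimate lacks full support, its KL divergence from an NE policy can be infinite, while mixing in a uniform exploration term keeps it finite.

```python
import math

def kl(p, q):
    """KL(p || q); returns math.inf if supp(p) is not contained in supp(q)."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi == 0.0:
                return math.inf
            total += pi * math.log(pi / qi)
    return total

def explore(policy, eps):
    """Mix with the uniform distribution so every action has mass >= eps/|A|."""
    n = len(policy)
    return [(1.0 - eps) * p + eps / n for p in policy]

ne_policy = [0.5, 0.5, 0.0]   # hypothetical NE policy
estimate = [1.0, 0.0, 0.0]    # degenerate policy estimate
print(kl(ne_policy, estimate))                 # inf
print(kl(ne_policy, explore(estimate, 0.1)))   # finite
```

The mixing step plays the role of the exploration procedure: it bounds the KL term that appears in the mirror-descent analysis.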
[1] B. Anahtarci, C. D. Kariksiz, and N. Saldi. Q-learning in regularized mean-field games. Dynamic Games and Applications 13.1 (2023): 89-117.
[2] K. Cui, and H. Koeppl. Approximately solving mean field games via entropy-regularized deep reinforcement learning. International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
[3] K. Cui, and H. Koeppl. Learning graphon mean field games and approximate Nash equilibria. In International Conference on Learning Representations (2022).
[4] F. Christian, K. Cui, and H. Koeppl. Learning sparse graphon mean field games. International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
[5] J. Perolat, et al. "Scaling up mean field games with online mirror descent." arXiv preprint arXiv:2103.00623 (2021).
---
Rebuttal Comment 1.1:
Comment: My question was: could you give some concrete examples for your claim that ``these conditions are restrictive for real-world applications'', while the conditions of your existence theorem are not restrictive? This is supposed to be the main motivation of this paper, but the paper does not mention any concrete example of such applications. (I understand your existence theorem requires weaker assumptions in many aspects theoretically.)
---
Reply to Comment 1.1.1:
Title: Response to Reviewer eGqz (Part 1/2)
Comment: Thanks for the reply. We would like to give some concrete examples concerning the existence of NEs in MFGs. For comparison, the results in [1] can easily be generalized to the finite-horizon MDP with undiscounted rewards. They require the reward functions and the transition kernels to be Lipschitz:
\begin{align*}
|r _{h}(s _{h},a _{h},\mu _{h}) - r _{h}(s _{h}^{\prime},a _{h}^{\prime},\mu _{h}^{\prime})|&\leq L _1\cdot(\mathbb{I}\lbrace s _h\neq s _h^{\prime}\rbrace +2\mathbb{I}\lbrace a _h\neq a _h^{\prime}\rbrace+\Vert \mu _h-\mu _h^{\prime}\Vert _1)\\\\
\Vert P _{h}(\cdot| s _{h}, a _{h},\mu _h)-P _{h}(\cdot| s _h^{\prime}, a _h^{\prime},\mu _h^{\prime})\Vert _{1}&\leq K _1\cdot(\mathbb{I}\lbrace s _h\neq s _h^{\prime}\rbrace +2\mathbb{I}\lbrace a _h\neq a _h^{\prime}\rbrace+\Vert\mu _h-\mu _h^{\prime}\Vert _1)
\end{align*}
for all $h\in[H]$. We define $Q _{\mathrm{Lip}}=L _1(1-(K _1/2)^H)/(1-K _1/2)$. The contraction constant in Proposition 3 in [1] for the finite-horizon MFG with undiscounted rewards is then
\begin{align*}
C = 3K _1+\frac{5}{\lambda}\bigg(L _1+\frac{K _1 Q _{\mathrm{Lip}}}{2}\bigg).
\end{align*}
The parameters $K _1,L _1, \lambda$ should guarantee that $C<1$, i.e., $K _1<1/3$, and $\lambda>5(L _1+ K _1Q _{\mathrm{Lip}}/2)/(1-3K _1)$.
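To make the restriction concrete, here is a quick numerical sketch of the contraction constant $C$ above (the specific parameter values are made up for illustration):

```python
def contraction_constant(K1, L1, lam, H):
    """C = 3*K1 + (5/lambda)*(L1 + K1*Q_Lip/2), with Q_Lip as defined above."""
    Q_lip = L1 * (1 - (K1 / 2) ** H) / (1 - K1 / 2)
    return 3 * K1 + (5 / lam) * (L1 + K1 * Q_lip / 2)

# Even moderate Lipschitz constants force a large regularization lambda:
print(contraction_constant(K1=0.2, L1=1.0, lam=1.0, H=10))   # > 1: no contraction
print(contraction_constant(K1=0.2, L1=1.0, lam=20.0, H=10))  # < 1: contraction holds
```

With $L_1=1$ and $K_1=0.2$, the contraction argument of [1] only applies once $\lambda$ is roughly an order of magnitude larger than the reward scale.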
* We consider the susceptible–infected–susceptible (SIS) problem in [2], which is a simplified version of the Susceptible–Infected–Removed (SIR) problem in [3]. The state space contains the susceptible state $S$ and the infected state $I$, i.e., $\mathcal{S}=\lbrace S,I\rbrace$, and the action space contains going out $U$ and keeping distance $D$, i.e., $\mathcal{A}=\lbrace U,D \rbrace$. The reward function is defined as $r _{h}(s _{h},a _{h},\mu _{h})=-r _{I}\mathbb{I}\lbrace s _h=I\rbrace -r _{D} \mathbb{I}\lbrace a _h=D\rbrace$, and the transition kernel is defined via
\begin{align*}
P _{h}(s _{h+1}=I | s _h=S, a _h=U,\mu _h) &= P _{a} \cdot \mu _{h}(I)\\\\
P _{h}( s _{h+1}=S| s _h=I,\cdot,\cdot) &= P _{r}\\\\
P _{h}( s _{h+1}=I| s _h=I, a _h=D,\cdot) &= 0
\end{align*}
for all $h\in[H-1]$. The first equation indicates that a susceptible person who goes out will be infected with probability proportional to the fraction of infected people. The parameters of this problem are $r _I,r _D,P _a,P _r>0$. We can show that
\begin{align*}
|r _{h}(s _{h},a _{h},\mu _{h})-r _{h}(s _{h}^{\prime},a _{h}^{\prime},\mu _{h}^{\prime})|&\leq \max\lbrace r _{I},r _D/2\rbrace\cdot(\mathbb{I}\lbrace s _h\neq s _h^{\prime}\rbrace +2\mathbb{I}\lbrace a _h\neq a _h^{\prime}\rbrace+\Vert\mu _h-\mu _h^{\prime}\Vert _1)\\\\
\Vert P _{h}(\cdot| s _h, a _h,\mu _h)-P _{h}(\cdot| s _h^{\prime}, a _h^{\prime},\mu _h^{\prime})\Vert _{1}&\leq \max\lbrace 1-P _r,P _a\rbrace\cdot(\mathbb{I}\lbrace s _h\neq s _h^{\prime}\rbrace +2\mathbb{I}\lbrace a _h\neq a _h^{\prime}\rbrace+\Vert\mu _h-\mu _h^{\prime}\Vert _1)
\end{align*}
To guarantee that $C<1$, we require $P _r>2/3$, $P _a<1/3$, and that $\lambda$ is sufficiently large relative to the rewards $r _{I}, r _D$, which restricts the regularization as well as the infection and recovery probabilities of each agent. The results in our work do not impose these constraints.
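As a sanity check on these dynamics, here is a minimal, hypothetical mean-field simulation of the infected fraction $\mu_h(I)$. It assumes (as the first transition equation suggests) that only susceptible agents who go out can become infected; the fixed probability `p_go_out` is a made-up stationary policy, not a learned one.

```python
def sis_step(mu_I, p_go_out, P_a, P_r):
    """One mean-field step of the SIS dynamics.

    mu_I: current fraction of infected agents; p_go_out: probability that a
    susceptible agent plays U (a fixed, hypothetical policy).
    """
    # Susceptible agents who go out get infected w.p. P_a * mu_I.
    new_infections = (1.0 - mu_I) * p_go_out * P_a * mu_I
    # Infected agents recover w.p. P_r regardless of their action.
    recoveries = mu_I * P_r
    return mu_I + new_infections - recoveries

mu_I = 0.3
for _ in range(50):
    mu_I = sis_step(mu_I, p_go_out=0.8, P_a=0.9, P_r=0.4)
print(mu_I)  # infected fraction after 50 mean-field steps
```

With these made-up parameters the infected fraction converges to the endemic fixed point $1 - P_r/(p\,P_a)$.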
* We consider the linear quadratic MFG in [4]. The state space is $\mathcal{S}=\lbrace -L,\cdots,L\rbrace$, and the action space is $\mathcal{A}=\lbrace -M,\cdots,M\rbrace$. The reward function and the transition kernel are defined as
\begin{align*}
r _{h}(s _{h},a _{h},\mu _{h})&= -\frac{1}{2}a _h^2+qa _h(m _h-s _h)-\frac{\kappa}{2}(m _h-s _h)^2\\\\
s _{h+1}&= \mathrm{Discretize}[s _h+ a _h + K(m _h-s _h)+\sigma\varepsilon _h]
\end{align*}
for $h\in[H]$, where $q,\kappa,K>0$ are the parameters of the game, $m _h=\sum _{s\in\mathcal{S}}s\cdot\mu _h(s)$ is the first moment of the state distribution, $\varepsilon _h\sim\mathcal{N}(0,1)$ is the noise, and the $\mathrm{Discretize}[\cdot]$ operator maps a state to the closest state in $\mathcal{S}$. With the data processing inequality and some basic calculations, we can show that
\begin{align*}
|r _{h}(s _{h},a _{h},\mu _{h})-r _{h}(s _{h}^{\prime},a _{h}^{\prime},\mu _{h}^{\prime})|&\leq \max\lbrace (M+2qL)^2/4,L(3\kappa L+2qM)\rbrace\cdot(\mathbb{I}\lbrace s _h\neq s _h^{\prime}\rbrace +2\mathbb{I}\lbrace a _h\neq a _h^{\prime}\rbrace+\Vert\mu _h-\mu _h^{\prime}\Vert _1)\\\\
\Vert P _{h}(\cdot| s _h, a _h,\mu _h)-P _{h}(\cdot| s _h^{\prime}, a _h^{\prime},\mu _h^{\prime})\Vert _{1}&\leq \frac{\max\lbrace |1-K|,1/2,2LK\rbrace}{2\sigma}\cdot(\mathbb{I}\lbrace s _h\neq s _h^{\prime}\rbrace + 2\mathbb{I}\lbrace a _h\neq a _h^{\prime}\rbrace+\Vert\mu _h-\mu _h^{\prime}\Vert _1).
\end{align*} | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
An Inductive Bias for Tabular Deep Learning | Accept (poster) | Summary: The authors address the problem of fitting deep nets to tabular datasets. This is a challenging task due to the heterogeneity of tabular datasets. Following recent work, the authors first demonstrate that tabular data require learning prediction functions with nonnegligible high-frequency components. Since deep nets have an inductive bias towards low-frequency functions, they may struggle in modeling prediction functions over tabular data. To solve this limitation, the authors evaluate several transformations applied to the input features designed to reduce the relevance of high-frequency components of the prediction function. Finally, the authors present an optimization to minimize the risk of the model while transforming the input features using a convex combination of predefined transformations. The merits of the new approach are demonstrated using several real-world datasets.
Strengths: Progressing the capabilities of deep nets on tabular data is of high importance for the ML community. The paper provides value both in terms of the empirical evidence on the inductive bias and the practical solution presented by the authors. The paper is well-written and easy to follow, and the motivation and solution are intuitive and clear.
Weaknesses: My biggest concern is the empirical evaluation conducted in the paper. It is focused on relatively low-dimensional datasets with a large sample size, a regime that is typically less challenging for NNs than the high-dimensional, low-sample-size one (see [1]). More importantly, the evaluation is performed using normalized accuracy (and AUC) in the main text; the reason for using these metrics is not explained there. In the supplemental material, the authors use the more standard unnormalized metrics, which demonstrate that the differences between most normalization schemes are subtle. It is unclear why the authors use normalized metrics in the main text and unnormalized ones in the supplemental material; this should be clarified. The comparison to other tree-based models should be moved to the main text, and the authors should use one metric. Do the tree-based methods significantly outperform the proposed approach using the normalized metric?
The comparison presented in the paper is limited to some normalizations and does not include other tree-based models or architectures recently proposed for tabular data, for example:
[1] Yang et al. "Locally sparse neural networks for tabular biomedical data." In International Conference on Machine Learning, pp. 25123-25153. PMLR, 2022.
[2] Ke, Guolin, et al. "TabNN: A universal neural network solution for tabular data." (2018).
Furthermore, additional transformations proposed for tabular data should be included.
[3] Alexander, Yotam, et al. "What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement." arXiv preprint arXiv:2303.11249 (2023).
[4] Zhu, Yitan, et al. "Converting tabular data into images for deep learning with convolutional neural networks." Scientific reports 11.1 (2021): 11325.
I would also include trees in the main text.
I have a concern regarding the preprocessing normalization performed in the paper; how is it performed with respect to train/test splits? The authors do not detail this. Improper normalization could lead to bias in classification results, as demonstrated in:
Moscovitch and Rosset, "On the Cross-Validation Bias due to Unsupervised Preprocessing", Journal of the Royal Statistical Society Series B: Statistical Methodology
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How is the convex combination of rank and scale significantly outperforming both of them?
Notations are quite confusing, for instance n is the dimension and N is the number of samples, why not use a different letter for the dimension?
Also the notation in page 6 is confusing, both upper case and lower cases are used to indicate the index of a vector.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the method are not discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Please see our responses below.
**Comment 1: “My biggest concern is the empirical evaluations conducted in the paper. It is focused on relatively low dimensional datasets, and with a large sample size, this regime is typically less challenging for NN than the high dimensional low sample size (see [1]). ”**
The motivation behind our dataset choices is discussed in the global response (Choice of Datasets). We agree with the Reviewer that high-dimensional low-sample-size datasets would be good additions for extensive benchmarking against tree-based approaches. They are explicitly excluded from the benchmarking study of Grinsztajn et al., from which our datasets were selected. On the other hand, we believe that our experiments with the current datasets are sufficiently representative to validate our claims (i.e., frequency reduction as an inductive bias), since the proposed approach significantly improves the performance of other NNs.
**Comment 2: “More importantly, the evaluation is performed using normalized accuracy (and AUC) in the main text; the reason for using these metrics is not explained in the main text. In the supplemental, the authors use the more standard un normalized metrics, which demonstrate that the differences between most normalization schemes is subtle. It is unclear why the authors use normalized in the main text and unnormalized in the supplemental material; this should be clarified. ”**
We agree with the reviewer that a discussion in the main text to motivate the use of normalized metrics would help the reader, and we will add this to the camera-ready version if accepted.
In our experiments, we observed that the performance of baseline approaches fluctuated strongly across different datasets. This implies that finding a suitable approach to use with a given dataset requires an exhaustive search, and that existing approaches do not generalize well across different tabular datasets. On the other hand, our proposed method consistently performed among the best. Based on these observations, we decided to use the normalized metrics in order to aggregate measurements across datasets (since the ranges of measurements differ) and convey information on the overall behavior (i.e., performance, convergence speed, frequencies) of different approaches. Notably, this approach is also used in other studies [1, 2]. We provide the raw measurements in the Appendix, as they do not convey information on overall behavior and need to be broken down by dataset (hence take more space).
Please also see our global response (Additional Synthetic Data Experiments).
[1] Wistuba, M., Schilling, N. and Schmidt-Thieme, L., 2015, October. Learning hyperparameter optimization initializations. In 2015 IEEE international conference on data science and advanced analytics (DSAA) (pp. 1-10). IEEE.
[2] Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M. and Hutter, F., 2022. Auto-sklearn 2.0: Hands-free automl via meta-learning. The Journal of Machine Learning Research, 23(1), pp.11936-11996.
**Comment 3: “The comparison to other tree-based models should be moved to the main text, and the authors should use one metric. Do the tree-based methods significantly outperform the proposed approach using the normalized metric? ”**
Yes - tree-based models are still the highest-performing overall. However, our proposed method closes the gap significantly.
**Comment 4: “The comparison presented in the paper is limited to some normalizations and does not include other tree-based models or architectures recently proposed for tabular data, for example...”**
We thank the reviewer for pointing out these interesting studies. Please see our global response (Focus of our work and its impact on the experiment design) for a detailed discussion on this matter.
**Comment 5: “I have a concern regarding the preprocessing normalization performed in the paper; how is it performed with respect to train/test splits?...”**
For each cross-validation fold (i.e., train/validation/test splits under different random seeds), normalizers are re-fit (i.e., statistics are collected) on the training set only. They are then used to transform the validation and test splits.
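As a minimal, stdlib-only illustration of this fit-on-train-only pattern (a sketch, not the paper's actual pipeline code):

```python
import statistics

def fit_standardizer(train_column):
    """Collect statistics from the training split only."""
    mu = statistics.fmean(train_column)
    sigma = statistics.pstdev(train_column) or 1.0  # guard constant columns
    return lambda xs: [(x - mu) / sigma for x in xs]

train = [1.0, 2.0, 3.0, 4.0]
test = [2.5, 10.0]  # test statistics never influence the normalizer

normalize = fit_standardizer(train)
print(normalize(train))  # mean 0, unit variance on the training split
print(normalize(test))   # transformed with *training* statistics
```

Fitting the normalizer per fold this way avoids the cross-validation bias discussed in Moscovitch and Rosset, since no validation or test statistics leak into preprocessing.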
**Comment 6: “How is the convex combination of rank and scale significantly outperforming both of them?”**
We observe that although rank and scale reduce frequency, they may impact NN performance negatively as well (Section 3.3/line 224). For example, ranking may lose relevant information when the relative distances among pairs of points are important, and scaling may impact the conditioning of the loss landscape (i.e., hindering the optimization). The convex combination can be seen as a training loss-driven dial that lets the NN learn the amount of ranking and scaling that should be conducted to find a low-frequency mapping that does not suffer as much from the negative impacts of the individual transformations.
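A minimal sketch of the idea (illustrative only: in the actual method the combination weight is learned end-to-end from the training loss, whereas here `w` is a fixed made-up value, and the transformations are simplified stand-ins for the paper's definitions):

```python
def rank_transform(xs):
    """Map each value to its normalized rank in [0, 1]."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r / (len(xs) - 1)
    return ranks

def scale_transform(xs):
    """Min-max scale to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def combined(xs, w):
    """Convex combination of the two transforms; w would be learned."""
    return [w * r + (1 - w) * s
            for r, s in zip(rank_transform(xs), scale_transform(xs))]

feature = [0.1, 0.2, 0.25, 50.0]   # heavy-tailed feature
print(combined(feature, w=0.7))    # mostly rank, some scale
```

With `w` near 1 the heavy tail is flattened (rank dominates); with `w` near 0 the relative distances are preserved (scale dominates); intermediate values trade off the two effects.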
**Comment 7: “Notations are quite confusing, for instance n is the dimension and N is the number of samples, why not use a different letter for the dimension? Also the notation in page 6 is confusing, both upper case and lower cases are used to indicate the index of a vector.”**
We thank the reviewer for this feedback. If accepted, we will work on simplifying the notation.
**Comment 8: “Limitations: The limitations of the method are not discussed in the paper.”**
Please see Appendix F for an extensive discussion of the limitations.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the authors for responding to all my comments. My concerns have been properly addressed. I keep my score unchanged. | Summary: The paper “An Inductive Bias for Tabular Deep Learning” presents an interesting exploration of inductive biases for deep learning applied to tabular data. The paper introduces a novel inductive bias, named frequency reduction, which is specifically designed for tabular data. The authors propose a novel approach that leverages domain knowledge to improve the performance of models on tabular datasets.
Strengths:
1. Novel approach: The paper introduces a novel inductive bias, named frequency reduction, which is specifically designed for tabular data. By incorporating domain knowledge into the learning process, this approach offers a unique perspective on improving the performance of models for tabular datasets. This contribution adds to the existing literature by presenting a fresh perspective on the application of inductive biases. The idea is inspiring to the community.
2. Rigorous evaluation: The authors provide a comprehensive evaluation of their proposed approach using various benchmark datasets. They compare the performance of their method against existing state-of-the-art techniques for tabular data. The evaluation metrics used are well-established and allow for a fair comparison. The empirical results provided demonstrate the effectiveness and superiority of the proposed approach.
3. Clear motivation: The paper effectively communicates the motivation behind the proposed approach and the reasons for its effectiveness on tabular datasets. The use of visual aids and examples further enhances the clarity of presentation.
Weaknesses:
1. Lack of theoretical analysis: The paper lacks a deeper theoretical analysis of the proposed inductive bias. While the empirical results are convincing, a more thorough theoretical explanation of why and how the approach works would enhance the paper’s contribution. Including a theoretical analysis could provide more insights into the underlying mechanisms and generalization capabilities of the proposed approach.
2. Dataset limitations: The paper primarily focuses on benchmark tabular datasets. However, real-world tabular datasets often exhibit diverse properties and complexities that may not be fully captured by the selected benchmarks. Including additional, more diverse datasets in the evaluation would provide a more comprehensive understanding of the generalizability of the proposed approach.
3. Writing quality: The paper is not well-structured. The section of experiments is too short while most experiment results are included in the appendix.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Lack of theoretical analysis: The paper lacks a deeper theoretical analysis of the proposed inductive bias. While the empirical results are convincing, a more thorough theoretical explanation of why and how the approach works would enhance the paper’s contribution. Including a theoretical analysis could provide more insights into the underlying mechanisms and generalization capabilities of the proposed approach.
2. Dataset limitations: The paper primarily focuses on benchmark tabular datasets. However, real-world tabular datasets often exhibit diverse properties and complexities that may not be fully captured by the selected benchmarks. Including additional, more diverse datasets in the evaluation would provide a more comprehensive understanding of the generalizability of the proposed approach.
3. Writing quality: The paper is not well-structured. The section of experiments is too short while most experiment results are included in the appendix.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1: “Lack of theoretical analysis: The paper lacks a deeper theoretical analysis of the proposed inductive bias. While the empirical results are convincing, a more thorough theoretical explanation of why and how the approach works would enhance the paper’s contribution. Including a theoretical analysis could provide more insights into the underlying mechanisms and generalization capabilities of the proposed approach.”**
We agree with the reviewer that our study could have benefited from a more thorough theoretical exploration of our inductive bias. In response, we briefly outline a theoretical argument that illustrates the underlying mechanism for improving NN performance on tabular data via frequency reduction.
Theorem 1 of Rahaman et al [1] provides the analytic form of the Fourier amplitudes of a general ReLU network, $f(\mathbf{x})$. The authors also show that along each direction of $\mathbf{k}$ space, these amplitudes are upper bounded as
$|\tilde{f}_\theta(\bf{k})| \leq N_f L_f(\theta) k^{-\Delta-1}$.
Here, $L_f$ is the Lipschitz constant of the NN for a given set of parameters $\theta$, $N_f$ is the number of linear regions, and $1\leq \Delta \leq d$ depends on the orientation of $\mathbf{k}$ with respect to the polytope faces represented by the NN. In any realistic setting, there is a maximum $N_f L_f$ that can be achieved through training, and therefore the amplitude of the NN Fourier coefficient for a fixed $\mathbf{k}$ is bounded from above. For a given $\mathbf{k}$ direction, we can therefore define a high-frequency region, $\Omega$, in which the target function Fourier amplitudes, $\widetilde{y}(\mathbf{k})$, cannot be fit by the neural network. Reducing the $L^2$ norm of the target function Fourier amplitude over $\Omega$, $\int_{\Omega} |\widetilde{y}(\mathbf{k})|^2 d\mathbf{k}$, relative to the corresponding integral over $\mathbb{R}^n$ will tend to reduce the corresponding error arising from this spectral bias when evaluated on a particular set of data points (assuming the target function Fourier amplitudes are square-integrable).
It is straightforward to show that applying our $\mathrm{scale}$ transformation with scale factor $a>1$ directly leads to this reduction of spectral energy over $\Omega$, since the Fourier amplitudes for a function $g(\mathbf{x})$ are related to those for the corresponding function, $g_{\rm scaled}(\mathbf{x})$ acting on scaled inputs as $\widetilde{g}(\mathbf{k}) = 1/a \times \widetilde{g}_{\rm scaled}(\mathbf{k}/a)$. This relationship shows directly that $\mathrm{scale}$ with $a>1$ maps a given Fourier component of the original function to a component at reduced frequency (and reduced overall magnitude) after applying the scaling transformation. The corresponding argument for $\mathrm{rank}$ further depends on the underlying distribution of the feature. Consider the simple example of a uniformly-distributed feature $x$ over an interval $[x_1,x_2]$. Then $\mathrm{rank}(x)$ acts in precisely the same way as $\mathrm{scale}$ with $a=1/(x_2-x_1)$, and the same effect is seen. Similar arguments can be made with other assumptions about the underlying distribution. This analysis illustrates how the transformations we consider can mitigate the impact of NN spectral bias; however, it neglects the corresponding impact on the optimization process itself. In practice, these effects are important and motivated the learnable convex combination of $\mathrm{rank}$ and $\mathrm{scale}$ we propose, as it is implicitly regularized by the loss function itself. We plan to analyze the corresponding training dynamics analytically in future work; however, its observed behavior is empirically consistent with our claims both on synthetic (see also results above) and real-world data experiments.
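The scaling relation above can be checked numerically. The following sketch (our own illustration, not the authors' code; the scale factor and frequency values are arbitrary assumptions) verifies that viewing a sinusoidal target through inputs stretched by a factor $a>1$ moves its dominant Fourier component to the lower frequency $k/a$:

```python
import numpy as np

a = 4.0   # illustrative scale factor (assumed), a > 1
k = 32    # dominant frequency of the original target function (assumed)
n = 1024  # samples on the unit interval
x = np.linspace(0.0, 1.0, n, endpoint=False)

# Original target with frequency k, and the same target seen through
# inputs scaled by a (i.e., composed with x / a).
f = np.sin(2 * np.pi * k * x)
f_scaled = np.sin(2 * np.pi * k * x / a)

def dominant_freq(signal):
    # Index of the largest non-DC Fourier amplitude.
    amps = np.abs(np.fft.rfft(signal))
    return int(np.argmax(amps[1:]) + 1)
```

Here `dominant_freq(f)` recovers `k`, while `dominant_freq(f_scaled)` recovers `k / a`, consistent with the claimed frequency reduction.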
**Comment 2: “Dataset limitations: The paper primarily focuses on benchmark tabular datasets. However, real-world tabular datasets often exhibit diverse properties and complexities that may not be fully captured by the selected benchmarks. Including additional, more diverse datasets in the evaluation would provide a more comprehensive understanding of the generalizability of the proposed approach.”**
With regards to dataset limitations, please see our general comments to all reviewers, clarifying the nature of the various real-world datasets we considered and highlighting their diversity.
**Comment 3: “Writing quality: The paper is not well-structured. The section of experiments is too short while most experiment results are included in the appendix.”**
We thank the reviewer for this suggestion. If accepted, we will rebalance the content to include more experiments in the main body for the camera-ready version.
[1] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In International Conference on Machine Learning, pages 5301–5310. PMLR, 2019.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I read the rebuttal and thank the authors for their detailed replies. My concerns have been well addressed. The heterogeneity of tabular datasets is an obvious property and the selection of dataset is important. I think the choice of datasets in rebuttal can convince me. Considering that the idea and story are both insightful for the community, I change my score to 6. | Summary: # Summary
The paper introduces a hypothesis that tabular datasets are best described by functions with high frequency. They connect this finding to existing empirical knowledge in the tabular space, and introduce formal tools to measure spectral properties of target functions in tabular data. The authors propose a simple novel neural network layer to transform tabular datasets such that they are better fit by neural networks, reducing the frequency of the function required to fit these datasets. The authors present results on a suite of datasets.
Overall, the paper is well-written and provides what seems to be a "missing link" in understanding performance gaps between neural and non-neural methods in the tabular space. The main contribution of the work seems to be the introduction of formal tools for analyzing spectral bias in tabular classifiers and their application to real-world datasets. The empirical results are more limited, but definitely encouraging. I suggest some minor revisions to the paper and a possible addition of a more controlled experiment in the form of simulated-data experiments, but believe that there is a solid case for acceptance.
# Major Comments
* A major limitation of the empirical results in the experiments section is that the authors only show results on an assortment of tabular datasets that are themselves heterogeneous and have a number of impossible-to-detect differences. As such, it is hard to really assess whether the proposed method is doing what it claims to do, even though it does lead to some minor but nontrivial gains in performance. I would suggest that the authors conduct a more controlled study, possibly on synthetic data, to demonstrate how the NNs with the proposed modifications perform on a dataset constructed to have a specific form of spectral bias. I am not sure of the exact design of such an experiment, but believe that, if designed correctly, it would make a stronger and more direct case for the authors' claimed mechanisms than the current results (better accuracy and faster convergence) do.
* The "discussion" section at the end of the related work is a nice synthesis of the state of the field of tabular machine learning. It also makes some helpful and pointed claims about adapting methods to data modalities, and how tabular researchers often "trade" complexity and hyperparameter tuning for performance gains, which provide useful guideposts for readers.
* Table 1 is helpful, but a bit more information about the NN model used would be helpful. Additionally, since a major axis of comparison here would be neural vs. non-neural (i.e. GBM) methods, it would be useful to add a comparison with XGBoost to this table (I expect that the transformations would have no effect on such a model, again making a clearer case). This could also go in the supplement.
* Despite a persuasive thesis and some data analysis that seems to support it, the empirical results are more mixed than expected (perhaps part of this is poor data visualization; the "whiskers" in the plot are extremely wide and it would be more helpful to display all of the data, ideally separated by dataset). I wonder whether the authors could comment on why the empirical impacts seem to vary so much. Could one factor be that the proposed method ignores feature interactions?
# Minor Comments
* The empirical experiments are not sufficiently described in the main text; at the very least, it would be useful for the authors to clearly list which datasets are used in the experiments.
* The main takeaways from Figure 3 are not entirely clear in the main text. IIUC the takeaway is that the CI bars are narrower in the center and right columns? For each "conclusion" the authors make about Figure 3, it would be helpful to explain how/where this is demonstrated in the figure. Also, why is there a feature with such large variance and instability in the rightmost column?
# Typos etc.
L325: "the first two rows" --> the first two columns
Strengths: See above.
Weaknesses: See above.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Please see our responses below.
**Comment 1: “A major limitation of the empirical results in the experiments section is that the authors only show results on an assortment of tabular datasets that are themselves heterogeneous and have a number of impossible to detect differences. As such, it is hard to really assess whether the proposed method is doing what it claims to do, even though it does lead to some minor but nontrivial gains in performance...”**
We agree with the Reviewer that relying heavily on empirical results generated from a heterogeneous collection of datasets makes it more difficult to isolate the effects of frequency reduction as inductive bias. To address this limitation, we have extended the results of our synthetic data experiment, showing results for the various frequency-altering transformations while directly varying the underlying frequency spectrum of the target function in the attached pdf (more details in the comments for all reviewers). The results directly substantiate several claims made elsewhere in the paper, namely (1) low frequency target functions are easier for NNs to learn, (2) NN performance degrades with increasing frequency, a consequence of spectral bias, (3) our proposed method is the most robust against increased target function frequency of those considered, and (4) the difference in performance metrics (e.g., AUC) can be significantly larger numerically in datasets designed to exhibit a particular form of high-frequency behavior. In these synthetic dataset results, only the frequency scale factor and the random seeds for initializing the neural network were changed, showing that indeed our proposed inductive bias mitigates the adverse effects of NN spectral bias in these experiments.
**Comment 2: “Table 1 is helpful, but a bit more information about the NN model used would be helpful. Additionally, since a major axis of comparison here would be neural vs. non-neural (i.e. GBM) methods, it would be useful to add a comparison with XGBoost to this table (I expect that the transformations would have no effect on such a model, again making a clearer case). This could also go in the supplement.”**
We thank the reviewer for this suggestion. If accepted, we will add the corresponding details and results for the final version of the paper.
**Comment 3: “Despite a persuasive thesis and some data analysis that seems to support it, the empirical results are more mixed than expected ...”**
There are a few considerations relevant to the Reviewer’s comment and question. For one, although the numerical size of the corresponding performance changes between methods is small in some cases, our aggregated empirical results show significant and consistent performance improvement using the proposed method. Because we built the proposed method to realize the inductive bias out of existing frequency-altering transformations, the expectation is not necessarily to see unprecedented performance, but rather consistent top performance across datasets (see “Focus of our work and its impact on the experiment design” in the general comments for more in-depth discussion). Furthermore, most datasets for which the proposed method does not achieve the highest central value AUC either already have very high baseline performance metrics for all methods (kdd-small, pol), implying that spectral bias is not as significant an issue for these datasets, or poor performance for all NN methods (Diabetes, eye_movements), that may indicate other pathologies present in these datasets that our method is not designed to address. Finally, as the reviewer suggests, more sophisticated methods incorporating frequency reduction as an inductive bias, such as those accounting for interactions between features, could further improve performance in some cases.
**Comment 4: “The empirical experiments are not sufficiently described in the main text; at the very least, it would be useful for the authors to clearly list which datasets are used in the experiments.”**
We will add this discussion to the main text if accepted.
**Minor Comments. “The main takeaways from Figure 3 are not entirely clear in the main text. IIUC the takeaway is that the CI bars are narrower in the center and right columns? For each "conclusion" the authors make about Figure 3, it would be helpful to explain how/where this is demonstrated in the figure. Also, why is there a feature with such large variance and instability in the rightmost column?"**
We agree that this can be clarified further in the main text. If accepted, we will provide additional details on interpreting the figure. Different columns in the figure correspond to different weights (scale, rank, combine) for a select dataset (electricity), and the intended message is the same for all three columns: across different random seeds (i.e., initializations), the weights converge to the same directions (the reviewer's confidence interval interpretation is correct). This implies that the network learns similar representations across different random initializations, and does not overfit to the rest of the parameters. Connected to this, we observe that these representations can be reused across models: representations learned via MLPs can be used to improve TabNet (L343).
---
Rebuttal Comment 1.1:
Comment: Thank you for the considered response. These mostly address my concerns.
However, I will note that the authors' claim "most datasets for which the proposed method does not achieve the highest central value AUC either already have very high baseline performance metrics for all methods (kdd-small, pol), implying that spectral bias is not as significant an issue for these datasets" *assumes* that the proposed method does what is claimed here; these experiments don't *prove* this. These empirical results do not prove, in any causal sense, that the proposed method performs best only due to improving the spectral bias and not due to any other dataset factors; I don't consider this a particularly persuasive response and still feel that more controlled (possibly synthetic data) experiments are needed to support this assumption.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for the positive score and support towards further strengthening our work, and would like to use this opportunity to receive additional feedback in order to improve the camera-ready version, if accepted.
We understand the Reviewer’s concern of not being able to draw causal relationships between experimental results on real-world datasets and spectral bias, as the heterogeneity of these datasets makes it challenging to rule out dataset characteristics other than target function frequency as contributing factors to performance differences. We would like to clarify that these empirical results comparing performance are primarily intended to demonstrate that our proposed method’s benefits translate to real-world datasets, complementing our analytic results that more concretely relate our proposed methods to target function frequency and spectral bias.
Our reply “…most datasets for which the proposed method does not achieve the highest central value AUC either already have very high baseline…” was intended to address the Reviewer’s Comment 3, providing a possible explanation for the variation in performance improvement across datasets. We agree with the Reviewer that our comment assumes that the proposed method does what we claim (namely, mitigate the effects of spectral bias). To strengthen this assumption, and in response to the Reviewer’s suggestion in Comment 1, in our general response we provided additional experimental results on synthetic data where we varied the frequency spectrum of the target function on otherwise identical datasets. This allowed us to rule out the effects from other dataset characteristics that may be clouding the evaluation with the real-world datasets. For low-frequency target functions, all methods yield comparable results, and as target function frequency increases, the performance of all methods degrades, consistent with the expectation from spectral bias. Also, as target function frequency increases, our proposed method maintains the best performance. Assuming that the performance degradation with increasing frequency is primarily due to NN spectral bias, these results show that our proposed method is the most effective at mitigating the impact of spectral bias on NN performance, at least for these synthetic datasets. Furthermore, the performance improvement observed with our method relative to the other baselines is significantly higher in these synthetic data experiments than that observed with the real-world datasets, consistent with the Reviewer’s expectation that differences other than target function frequency between real-world datasets can obscure the performance gains provided by our proposed method.
In conclusion, we believe that the experimental results we present show that (1) our proposed method is the best among the other baselines at mitigating the impact of spectral bias and (2) the benefits of our frequency-informed methods (i.e., selective rank and the trainable layer) translate to real-world datasets. We acknowledge that though these experiments do not _prove_ that the proposed methods are improving performance solely due to frequency-reduction, they do provide compelling evidence in support of this hypothesis. Such evidence initiates a new direction to be explored in the tabular deep learning domain. We believe translating this empirical evidence into proof is a necessary follow-up study, and we are open to suggestions for other controlled experiments that can be done to further corroborate our claims. | Summary: This paper proposes an inductive bias for tabular deep learning to bridge the performance gap between deep learning and tree-based methods on tabular data by reducing the frequency of irregular target functions through scaling and ranking transformations. Deep learning methods underperform tree-based methods on tabular data due to the interaction between irregular target functions and the tendency of neural networks to learn smooth functions. Spectral analysis tools can be used to identify the irregularity of functions described by tabular data sets and the potential for smoothing through scaling and ranking transformations. The proposed inductive bias of frequency reduction through scaling and ranking can significantly improve the performance of neural networks on tabular data without introducing additional complexity.
Strengths: - The paper provides a clear and concise explanation of the performance gap between deep learning and tree-based methods on tabular data.
- The proposed inductive bias is simple and easy to implement, without requiring additional hyperparameter tuning or complex model architectures.
- The paper provides empirical evidence of the effectiveness of the proposed method on various tabular datasets and neural network architectures.
Weaknesses: - The paper does not compare the proposed method with other existing methods for improving the performance of neural networks on tabular data.
- The paper does not provide a detailed analysis of the impact of scaling and ranking transformations on the interpretability of the learned models.
- The paper does not explore the potential limitations of the proposed method on highly irregular target functions or noisy data sets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does the proposed method compare with other existing methods for improving the performance of neural networks on tabular data, such as feature engineering or model ensembling?
- Can the proposed method be extended to handle highly irregular target functions or noisy data sets?
- How does the proposed method affect the interpretability of the learned models, and how can this be addressed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Investigate the impact of the proposed method on the interpretability of the learned models and explore ways to improve interpretability without sacrificing performance.
- Investigate the potential limitations of the proposed method for highly irregular target functions or noisy datasets and develop alternative methods to overcome these challenges.
- Investigate the potential of combining the proposed method with other existing methods for improving the performance of neural networks on tabular data, such as feature engineering or model ensembling.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Please see our responses below.
**Q1: “How does the proposed method compare with other existing methods for improving the performance of neural networks on tabular data, such as feature engineering or model ensembling?”**
We believe that the domain of tabular deep learning is in need of theoretical advancements, on top of the existing strong empirical studies (see [Related Work/Discussion/L94]). Consequently, in this work, rather than benchmarking our proposed method to compare against other existing approaches, we focus on showing that (1) NN spectral bias is an important factor contributing to the lack of performance on tabular data, and (2) learnings from this analysis can be applied to NNs straightforwardly (e.g., using selective rank (Eq. 11), or the proposed learnable convex combination (Eq. 13)).
Additionally, we would like to emphasize that our proposed method is developed to introduce minimal complexity (i.e., less than a single feedforward layer) and no additional hyper-parameters; therefore, it is expected to work synergistically with other existing approaches (see Experiments/Frequency Reduction with other Network Architectures). As a follow-up study, we plan to extend our work to (1) investigate the behavior of other approaches designed towards improving neural networks on tabular data through the lens of spectral analysis, (2) evaluate the effectiveness of the proposed inductive bias on these approaches, and (3) provide more extensive benchmarking.
**Q2: “Can the proposed method be extended to handle highly irregular target functions or noisy data sets?”**
Our method is designed to improve the NN’s ability to learn highly irregular target functions, where the irregularity is expected to convey relevant information to the corresponding task (i.e., irregularity is not due to noise). For this case, we provide additional experimental results in the PDF submitted for rebuttal, where we evaluate the performance of our method and baselines on 16 synthetic datasets with increasing target function frequency (i.e., irregularity). From the figure, we observe that our method consistently performs the best among the methods considered. Nevertheless, its performance does degrade as the frequency of the target function increases, since even with reduced relative target function frequency, the NN still is subject to spectral bias.
If the highly irregular behavior of the target function is caused by noise, we expect the spectral bias of neural networks to act as a regularizer and help with generalization [1]. Although we have not explicitly studied the behavior of our method on noisy datasets, since it does not distinguish between high-frequency noise and high-frequency information, we do not expect the learned high-frequency patterns to improve generalization further. To extend our methods to noisy datasets, additional mechanisms to differentiate informative high-frequency components from noise will likely be required.
[1] Fridovich-Keil, S., Gontijo Lopes, R. and Roelofs, R., 2022. Spectral bias in practice: The role of function frequency in generalization. Advances in Neural Information Processing Systems, 35, pp.7368-7382.
**Q3: “How does the proposed method affect the interpretability of the learned models, and how can this be addressed?”**
In this work, we focus on identifying the fundamental reasons behind the lack of performance of neural networks on tabular data. Although we believe investigating the interpretability of neural networks is important (e.g., to argue for replacing tree-based ensembles with neural networks, as they are widely accepted as more performant and interpretable alternatives), it is not in the scope of this study, however, it can be a promising future direction given the observations we make in Appendix D1 (L731). Specifically, since the proposed method independently learns a set of weights for each feature, it may be possible to interpret these weights to understand feature importances (e.g., scaling weights that converge to zero for some features may point out irrelevant/redundant features) and types of information they contribute to the decision (e.g., features with high ranking weights may indicate the feature can be compressed into quantile/percentiles instead of raw values).
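To make the per-feature weighting concrete, here is a minimal numpy sketch (our own illustration, not the authors' implementation) of a convex combination of `rank` and `scale` transformations with one mixing weight per feature; in the proposed method these weights would be learned end-to-end, whereas here they are fixed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(size=(100, 3))  # hypothetical skewed tabular features

def scale(X):
    # Standardization, one variant of the `scale` transformation.
    return (X - X.mean(axis=0)) / X.std(axis=0)

def rank(X):
    # Map each feature to its normalized rank in [0, 1].
    order = X.argsort(axis=0).argsort(axis=0)
    return order / (len(X) - 1)

# Per-feature mixing weights in [0, 1] (assumed values; learnable in the
# actual method). w = 0 keeps pure scale, w = 1 keeps pure rank.
w = np.array([0.0, 0.5, 1.0])
Z = w * rank(X) + (1.0 - w) * scale(X)
```

A weight near 0 or 1 for a given feature would then indicate which transformation the network found most useful, in line with the interpretability discussion above.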
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response.
My concerns have been addressed and I will keep my score. | Rebuttal 1:
Rebuttal: Dear reviewers, we thank you for all of your constructive feedback. In this general response, we would like to address some questions/concerns that arose from multiple reviewers. We address specific questions further on their corresponding threads.
**Focus of our work and its impact on the experiment design**
First, we feel it may be worth re-emphasizing the focus of our work and how it informs the metrics we choose to evaluate. The domain of tabular deep learning has been progressing rapidly, where we see extremely creative and effective ways of improving NN performance. On the other hand, there is little work that focuses on identifying the fundamental reasons behind the typical gap between tree-based and NN-based techniques on tabular data. We see our work as one of the early steps towards building a lens that can help analyze this performance gap. We believe it opens up a new direction for researchers to study and understand the behavior of NNs on tabular data, and eventually entirely address the NN performance gap (i.e., compared to tree-based ensembles). We believe our central contribution is showing, through both theoretical and empirical analysis, that (1) NN spectral bias is an important factor contributing to the performance gap between trees and NNs, and (2) learnings from this analysis can be applied to NNs straightforwardly (using e.g., selective rank, or the proposed learnable convex combination (refer to sections)). Since we ultimately design a technique that utilizes existing transformations such as rank and scale, the expectation from these learnings is not necessarily to see unprecedented performance gains; rather, it is to see that approaches that make use of existing practices for dealing with tabular datasets (e.g., scaling, ranking) but with spectral bias in mind consistently perform better and converge faster across different datasets. In effect, the proposed approach we motivate through our analysis can be seen as alleviating the need for exhaustively searching for the correct transformation to apply to each feature in each different dataset to reduce the impact of spectral bias.
For this reason, although in some instances the magnitude of performance gained on individual metrics with our proposed method appears numerically small, the overall performance improvement and convergence speed up is significant and consistent across datasets, which can be clearly seen through the normalized metrics we consider in the main text. This strongly suggests that extensions of our methods beyond the simple frequency-altering transformations we consider could provide even numerically larger gains. Note that our approach can also be used in any other NN-based method, and the benefits may vary depending on how much various architectures are impacted by spectral bias (c.f., our TabNet experiment).
**Choice of Datasets**
Second, we would like to clarify our choice of datasets for our experiments. We chose all datasets from Grinsztajn et al (NeurIPS'22), which considered several criteria for selection in order to benchmark tree-based vs NN-based model performance (Grinsztajn et al, Appendix 3). Notably, we limited our study to numerical feature-heavy classification datasets (14) among the regression and classification datasets (45) provided in Grinsztajn et al. They are all real-world datasets that span fundamentally different problems and data characteristics, from classification for particle physics experiments (e.g., MiniBooNE) to predicting credit card defaults in Taiwan (e.g., credit card clients), containing combinations of both raw data and engineered features. Furthermore, in contrast to Grinsztajn et al, we do not remove complexities such as missing features, low/high-cardinality categorical features, and class imbalance, or discard any samples from the datasets, in order to retain the diverse properties present in these datasets.
**Additional Synthetic Data Experiments**
Due to the diverse nature of the datasets considered in our experiments, there were some concerns voiced about being able to reliably draw conclusions about the effect of our proposed inductive bias from our experiments. Also, the different characteristics naturally lead to variation in the performance gain observed from our methods. To address these concerns, in this reply we also provide additional results from the synthetic data experiment in our original submission that support several of our observations made on the real-world datasets. In particular, in the attached pdf we show results obtained by varying the overall frequency of the target function in Eq. 24, keeping all other parameters fixed. For each choice of scale factor, we trained the same two-hidden-layer MLP on the raw (unit-scaled) data, and with rank, scale (standardization), and our proposed convex combination layer for 200 epochs (25 epochs were used to tune the learning rate). The results in the attached figure clearly show (1) low frequency target functions are easier for NNs to learn, (2) NN performance degrades with increasing frequency, a consequence of spectral bias, (3) our proposed method is the most robust against increased target function frequency, and (4) the difference in performance metrics (e.g., AUC) can be significantly larger numerically in datasets designed to exhibit a particular form of high-frequency behavior. Taken together, these results further substantiate our claims that frequency reduction as an inductive bias can significantly improve performance on datasets with high-frequency target functions.
Pdf: /pdf/9bf44046ef48c1a0b31274ace2d83b6165cf6084.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
GAIA: Delving into Gradient-based Attribution Abnormality for Out-of-distribution Detection | Accept (poster) | Summary: The work looks at leveraging gradient-level attribution information in order to detect semantically shifted OOD samples. In particular, the paper proposes two post-hoc OOD detection methods that leverage the extracted gradient attribution called GAIA-A and GAIA-Z. Both the proposed GAIA-A and GAIA-Z methodologies show strong empirical performance across a wide range of OOD detection tasks.
Strengths: 1. The paper provides analysis on an underexplored domain relating to attribution gradient and how to leverage this information for OOD detection.
2. The resulting post-hoc OOD detectors are simple to implement and show strong empirical performance across a wide range of OOD detection tasks.
Weaknesses: There are several other post-hoc methods that have not been included in the empirical evaluation. For example, KNN[1] and Lee and AlRegib [2] would provide fair points of comparison. In addition, if possible, the reviewer would also encourage the authors to include deviations with each empirical result.
[1] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, 2022.
[2] Jinsol Lee and Ghassan AlRegib. Gradients as a measure of uncertainty in neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pages 2416–2420. IEEE, 2020.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. On line 227 the authors hypothesize that "channel-wise average abnormality is better suited for application in scenarios with a large label space." However, the results from the CIFAR-100 setting seem to indicate that the choice between GAIA-A and GAIA-Z is not simply based on how large the label space is.
2. On line 151, the authors discuss observations on the number of zero partial derivatives for OOD samples. Are there empirical analyses that corroborate these observations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The reviewer would recommend the authors consider adding additional points of comparison as stated in the weakness section above as well as additional runs of the method under models trained with differing seeds.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer CiZ8
We appreciate your thoughtful review of our work, and we address your questions below:
> Q1: Additional runs of the method under models trained with differing seeds.
Thank you for providing valuable suggestions. For the CIFAR benchmarks (CIFAR10 and CIFAR100), **we trained five ResNet34 models, each using cross-entropy loss under a different random seed**. The tests indicate that **the performance of GAIA-A and GAIA-Z remains stable, with fluctuations relative to the average showcased in our response to Q2**.
The ResNetV2 (BiT) model utilized for the ImageNet-1K benchmark is sourced from the repository of Big Transfer [1], which is widely employed as a test model in OOD detection research. Given that retraining is time-consuming, we will include the range of data fluctuation on ImageNet-1K in the paper after conducting the tests. In our paper, **we will add deviations with each empirical result of our methods in the main experiments (both CIFAR and ImageNet-1K benchmarks).**
[1] Kolesnikov, Alexander, et al. Big transfer (bit): General visual representation learning. ECCV, 2020.
> Q2: Fair comparison with KNN and Lee and AlRegib.
- Comparison with KNN (ResNet34 on CIFAR10 and CIFAR100, ResNetV2 on ImageNet-1K).
| Methods | CIFAR10 (Avg FPR95) $\downarrow$ | CIFAR10 (Avg AUROC) $\uparrow$| CIFAR100 (Avg FPR95) $\downarrow$ | CIFAR100 (Avg AUROC) $\uparrow$|ImageNet-1K (Avg FPR95) $\downarrow$ | ImageNet-1K (Avg AUROC) $\uparrow$|
| ------ | ----| ---- | --- | --- | --- | --- |
|KNN| 28.14% $\pm$ 1.97%| 95.86% $\pm$ 0.31%| 82.33% $\pm$ 4.87%| 70.21% $\pm$ 8.35%| 53.97% | 85.01% |
|GAIA-A | 12.73% $\pm$ 2.01% | 97.53% $\pm$ 0.22% | 68.97% $\pm$ 3.13%| 86.42% $\pm$ 1.49% | **37.42%** | **91.90%** |
|GAIA-Z| **3.26%** $\pm$ 1.39%| **99.28%** $\pm$ 0.26% | **29.10%** $\pm$ 3.36%| **94.93%** $\pm$ 0.52% | 50.65% | 89.03% |
**In comparison to KNN on the standard-trained network, our method continues to maintain a comprehensive advantage.** An important characteristic of post-hoc OOD detection methods is that they **do not require modifying the training procedure and objective** (all the baselines in our paper are tested on the standard-trained network). Note that **GAIA-A and GAIA-Z are plug-and-play methods that do not even require in-distribution data for estimation or adjustment of hyperparameters**. Hence, we consider the comparison with KNN on a standard-trained network to be fair. Following your suggestion, **we have revised the manuscript to include KNN as a baseline and conducted comparisons in the main experiments.**
Nevertheless, we still provide the test results of KNN with contrastive learning for the reviewer's reference. Through comparison, it can be observed that GAIA-A and GAIA-Z perform similarly to the KNN+ method; on the CIFAR10 and CIFAR100 benchmarks in particular, GAIA-Z still outperforms KNN+. The performance improvement of KNN+ relies on a model trained through contrastive learning, which intervenes in the model's training process, and this could introduce more challenges and uncertainties in practical applications.
| Methods | CIFAR10 (Avg FPR95) $\downarrow$ | CIFAR10 (Avg AUROC) $\uparrow$| CIFAR100 (Avg FPR95) $\downarrow$ | CIFAR100 (Avg AUROC) $\uparrow$|ImageNet-1K (Avg FPR95) $\downarrow$ | ImageNet-1K (Avg AUROC) $\uparrow$|
| ------ | ----| ---- | --- | --- | --- | --- |
|KNN + contrastive learning| 10.41% | 97.62% | 65.53% | 88.42% | 38.47% | 90.91% |
- Comparison with Lee and AlRegib.
**Please refer to Table 1 in the PDF submitted with the global rebuttal**. Details have been provided in our Supplementary Material (Appendix Section H).
As suggested in GradNorm [1], to ensure a fair comparison, the gradients of uniform noise are used as a surrogate for OOD data during the training of the binary classifier, and our methods outperform Lee and AlRegib. **Regarding this point, we have placed the comparison with other gradient-based methods in the appendix and introduced a "Discussion" section in the paper for further analysis.**
> Q3: GAIA-A and GAIA-Z are not simply based on how large the label space is.
Thank you for your thorough observation. We posit this hypothesis because GAIA-A has the ability to aggregate information from all predicted outputs, which demonstrates superior performance on the extensive ImageNet-1K dataset (1000 categories). Furthermore, within the same benchmark, the performance of GAIA-A improves as more labels are aggregated. In Table 5 of the paper, we present the result using only the top-1 label, showcasing a significant performance gap compared to the aggregated results.
We find your perspective to be more rigorous, and we agree that the improvement in GAIA-A's performance on ImageNet is influenced by factors beyond just the expansion of the label space (such as image dimensions, model scale, etc.). **Regarding this point, we have amended these descriptions accordingly. Highlighting the benefits of a broader label space aggregation for GAIA-A aligns with our understanding**, and we will supplement our study with experiments that illustrate how GAIA-A's performance changes on the same benchmark as the number of aggregated label outputs increases.
> Q4: Empirical analyses that corroborate these observations?
**Please refer to Figure 3 in the PDF submitted with the global rebuttal**. For detailed descriptions, please refer to Appendix Section B in the supplementary materials.
In Figure 3, we visualize the sparsity of gradients on feature maps across all channels at a specific layer (measured by the proportion of zero gradients in the entire feature map). Each data point represents the sparsity of one feature map. In deeper layers, OOD images tend to generate attribution gradient matrices with extremely low sparsity across a substantial number of channels, resulting in a remarkable reduction of zero values in the matrix, indicating an abnormal behavior.
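The sparsity measure described here can be sketched in a few lines. This is a minimal numpy illustration under our own assumptions (random tensors standing in for real attribution gradients, and an exact-zero count with no threshold), not the paper's implementation:

```python
import numpy as np

def channel_sparsity(grad_feature_maps):
    # grad_feature_maps: attribution gradients of shape (C, H, W).
    # Sparsity of a channel = proportion of exactly-zero gradients
    # in its H x W feature map.
    C = grad_feature_maps.shape[0]
    flat = grad_feature_maps.reshape(C, -1)
    return (flat == 0).mean(axis=1)

# Toy example: an "ID-like" gradient map with many zeros (e.g., from
# ReLU masking) vs. an "OOD-like" dense map with almost no zeros.
id_like = np.zeros((2, 4, 4))
id_like[:, 0, 0] = 1.0
ood_like = np.random.randn(2, 4, 4)
```

Under the zero-deflation abnormality, each point in a scatter plot like Figure 3 would be one channel's `channel_sparsity` value, with OOD inputs clustering at low sparsity in deeper layers.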
---
Rebuttal 2:
Title: Response to Author Rebuttal
Comment: The reviewer would like to thank the author for providing the additional experimental evaluations and answering all the existing questions.
**Fair comparison with KNN and Lee and AlRegib**
These experimental results of Lee and AlRegib match with prior expectations and it is encouraging that GAIA is able to outperform all the prior gradient-based and KNN distance-based OOD detection methodologies.
**Other comments and questions**
I thank the authors for the clarification and the additional edits. Unfortunately, my general thoughts on the paper remain consistent so I won't raise the review score any higher, but I want to encourage the authors to further organize the paper in an effort to improve clarity.
---
Rebuttal Comment 2.1:
Title: Thanks for the comments
Comment: We appreciate the response from the reviewer. We believe our rebuttal has addressed the suggestions and questions raised. Please kindly let us know if you have any other ongoing concerns or questions.
Regarding the issue of the paper's organization, we attach great importance to it. We have already made adjustments and are continuing to refine it further.
Our improvements:
- We have relocated the "Related Work" section to the end of the manuscript to ensure a smoother flow of our idea.
- In Section 4, we have repositioned Equations 3 and 4 to an additional theoretical analysis section at the end of the paper. Detailed explanations of these equations are provided in the appendix for greater clarity. Moreover, in order to enhance the reader's comprehension of our proposed idea, we have incorporated the visualization that elucidates the connection between the attribution phenomenon and the two abnormalities.
- For the ablation experiments, we have restructured the sequence and introduced guiding statements at the beginning to clarify the logical flow of the ablation study. Our ablation study begins by validating the effectiveness of each step of the method, moving from outermost to innermost. Moreover, all Figures and Tables have been arranged coherently, following the sequence from GAIA-A to GAIA-Z.
- We have carefully corrected the typos (grammar, table formatting, descriptive details, mathematical expressions, and so forth). | Summary: The proposed gradient-based attribution method in this paper is a promising approach that can help distinguish between ID and OOD patterns. By analyzing the uncertainty that arises when models attempt to explain their predictive decisions, the method can provide a more robust and reliable approach to detecting OOD data, which is superior over previous works in gradient-based OOD detection. The authors test their approach on well-known OOD detection benchmarks such as ImageNet and CIFAR, which are widely used in computer vision research. The results demonstrate that the proposed approach is effective in detecting OOD data, outperforming state-of-the-art methods by a significant margin.
Strengths: 1. Innovative perspective on quantifying disparities between in-distribution (ID) and out-of-distribution (OOD) data based on analyzing the attribution of embedding features. This approach offers a new perspective on detecting OOD data.
2. Introduces two forms of abnormalities for OOD detection, i.e., the zero-deflation abnormality and the channel-wise average abnormality, which may help to identify OOD data more accurately and effectively.
3. Proposes GAIA, a simple and effective approach that incorporates Gradient Abnormality Inspection and Aggregation, which can be readily applied to pre-trained models without further fine-tuning or additional training.
4. Demonstrates superior performance on both commonly utilized (CIFAR) and large-scale (ImageNet) benchmarks compared to competing approaches, reducing the average FPR95 by 26.75% on CIFAR10 and by 45.41% on CIFAR100.
Weaknesses: 1. The authors claim that one can analyze the uncertainty raised when the model makes decisions via gradient attribution, and that this is the main contribution of the paper. However, I didn't find any theoretical explanation or heuristic justification of how the contribution value is related to uncertainty. With so much systematic analysis in Section 4.1, I think the only key point supporting why the proposed method works is the "Zero-deflation Abnormality" (forgive me if I misunderstand), but it does not explain why the suggested method is a good indicator of prediction uncertainty.
2. The authors claim that using the softmax prediction across each class is a novelty of their method. However, from my understanding, it is equivalent to using the KL divergence between the uniform distribution and the model's softmax prediction as the objective, which has been well discussed in GradNorm. Therefore, more or less, the authors overclaim their contribution.
3. More advanced works, such as ASH, are not compared in the paper. Experimental results with larger models on ImageNet (such as ViT) should also be considered, following previous works such as [1].
4. Since there are some previous works studying gradient-based OOD detection, a natural question is why the proposed method is superior to GradNorm. For example, it seems that the authors compute gradients w.r.t. model outputs (instead of parameters as in GradNorm); is there any reason for this choice (either heuristic or theoretical)? It also seems that the authors use the number of non-zero gradients in OOD scoring; what is the advantage over GradNorm's use of the L2 norm of gradients?
[1] Yiyou Sun, Yifei Ming, Xiaojin Zhu and Yixuan Li. Out-of-distribution Detection with Deep Nearest Neighbors. ICML, 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to the part of Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please refer to the part of Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer qAeX
Thank you for your constructive feedback. Before addressing your concerns, we believe there might be **some misconceptions about our method that need clarification**. We will begin by clarifying certain points.
> Clarification point 1: **Our methods (both GAIA-A and GAIA-Z) are not based on the gradients with respect to model outputs.**
GAIA-A and GAIA-Z are grounded in attribution gradients, specifically **the gradient of the $c$-label output score $S_c(z)$ with respect to the input variable $z$ (i.e., $\frac{\partial S_c(z)}{\partial z}$)**. The term "input variable" generally refers to the input unit or a specific feature unit in the intermediate feature map.
**Attribution gradients are widely utilized in visual explainability techniques** (such as GradCAM, LayerCAM, Integrated Gradients, etc.), and **they are unrelated to the gradients commonly associated with our typical understanding of network optimization (i.e., gradients of the parameters)**. **Hence, our approach fundamentally differs from other gradient-based OOD detection methods**. To provide a better understanding, let's explain the concept of attribution gradients. Attribution gradients refer to the sensitivity of the model's predicted output w.r.t. a particular input variable, indicating how that feature influences the model's prediction.
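For a concrete picture, the attribution gradient of a toy differentiable score function can be sketched as follows. This is a hypothetical linear scorer with a finite-difference check standing in for autograd, not the paper's model or implementation:

```python
import numpy as np

def score(z, w_c):
    # S_c(z): the c-th class score for input/feature vector z
    # (here a toy linear scorer for illustration only).
    return float(w_c @ z)

def attribution_gradient(z, w_c, eps=1e-6):
    # dS_c(z)/dz via central finite differences: the sensitivity of the
    # class score to each input variable, i.e. the attribution gradient.
    grad = np.zeros_like(z)
    for i in range(len(z)):
        zp, zm = z.copy(), z.copy()
        zp[i] += eps
        zm[i] -= eps
        grad[i] = (score(zp, w_c) - score(zm, w_c)) / (2 * eps)
    return grad

z = np.array([0.5, -1.0, 2.0])
w_c = np.array([1.0, 2.0, -0.5])
g = attribution_gradient(z, w_c)  # for a linear scorer this recovers w_c
```

In a deep network, `z` would be a unit of an intermediate feature map and the gradient would come from backpropagation rather than finite differences; GAIA then inspects abnormalities in these per-feature sensitivities.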
> Clarification point 2: About the assertion Weakness 2 raised by the Reviewer.
Our approaches leverage the abnormality in attribution gradients (as mentioned in Clarification Point 1) for OOD detection, **not utilizing the softmax prediction across each class**. Similarly, **our methods are not equivalent to the objective mentioned by the reviewer**. There exists a fundamental distinction between these approaches.
The part that might have caused the reviewer's misunderstanding could be related to the output component of GAIA-A. The core of GAIA-A is centered around utilizing the abnormality in the channel-wise average gradients at certain intermediate layers, which act as visual explanation weights. As we explored the aggregated label space, we discovered that employing log softmax aggregation and splitting it into two parts (output and inner) can further enhance the detection performance. However, it's important to emphasize that **the primary source of enhancement still originates from the inner component**. For more details, please refer to the "Method" section, the "Influence of Label Space Aggregation" in the ablation experiments, and Appendix Section E in the supplementary material.
> Q1: Our main contribution.
**Please refer to the global rebuttal.**
> Q2: not explain why the suggested method is a good indicator of prediction uncertainty.
In this paper, we begin by **addressing an observed phenomenon** (gradient-based attribution methods yield uncertain results on OOD data, **please refer to Figure 1 in the PDF of the global rebuttal**). We then **formulate an explanation for this phenomenon in the context of Taylor expansion (Eq. (3) and Eq. (4))**. Based on the explanation, we **introduce two types of abnormalities to reflect and quantify this uncertainty**, which we derive as effective tools for detecting OOD samples.
GAIA-Z is derived from the Null-player axiom [1], which states that a feature should be considered as having zero importance when it makes no contribution to the model's output. GAIA-Z focuses on determining how certain the model is about its final predictions. On the other hand, GAIA-A places more emphasis on detecting the abnormality arising from gradient-based attribution methods, e.g., GradCAM, when they sum the attribution gradients as channel-wise weights (for the proof of this relation, please refer to Appendix Section D in the supplementary material). GAIA-A aims to collect extreme outlier values in this process.
[1] Khakzar, Ashkan, et al. "Do explanations explain? Model knows best." CVPR. 2022.
> Q3: Superiority of our methods.
The comparative experiments between them have been included in the PDF of the global rebuttal. Our approach achieves superior performance, offers a fresh perspective, and supports batch processing. (details please refer to Appendix H in the supplementary material)
> Q4: More comparisons and results with larger models.
**The comparison with ASH is shown in the PDF of the global rebuttal (Table 3)**.
From the data in the table, the ash_s@90 method slightly outperforms GAIA-A on ImageNet-1K. However, under other conditions, GAIA-A or GAIA-Z performs better. While **ASH achieves competitive results on the ImageNet dataset through careful parameter tuning, it is highly sensitive to its hyperparameters and lacks empirically validated defaults**. Moreover, these parameters can vary with different model architectures, affecting the practicality of the method. Furthermore, ASH, React, and Rankfeat are similar in that they all rely on deep features of the model. These methods tend to perform well on large datasets like ImageNet but show poorer performance on smaller datasets like the CIFAR benchmarks. **In contrast, the GAIA method does not require parameter adjustments and directly achieves good results**.
**Following your suggestion, we have revised the manuscript to include ASH as a baseline and conducted comparisons in the main experiments**.
We consider ResNetV2 (BiT) to be a large-scale CNN model. Regarding ViT, our method is not applicable to transformer-based models. ViT employs positional encoding to capture spatial information, posing challenges for attribution (**see Figure 4 of the pdf**). For these reasons, existing attribution methods are rarely applied to ViT models, resulting in poorer performance for GAIA on ViT (with an average FPR of 49.13%, compared to 38.02% for KNN). We acknowledge this limitation of the current GAIA method and have included it in the "Limitations" section. No proposed method is without imperfections, and there remains room for improvement.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: The authors have addressed my concerns, and I would like to raise my score to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for raising the score
Comment: Thank you! We appreciate the updated score and we are glad that our clarifications have addressed your concerns. Thanks again for taking the time to review our paper and providing detailed comments. | Summary: In this paper, the authors propose a novel perspective for quantifying the disparities between in-distribution (ID) and out-of-distribution (OOD) data by analyzing the uncertainty that arises when models attempt to explain their predictive decisions. They investigate the abnormality in gradient-based attribution methods when dealing with OOD data and introduce two forms of abnormalities. They further propose GAIA, a simple and effective approach based on gradient of attribution models for OOD detection. Experimental results demonstrate that GAIA outperforms state-of-the-art methods on CIFAR and ImageNet benchmarks.
Strengths: (1) The paper offers an innovative perspective on quantifying the disparities between ID and OOD data by analyzing the uncertainties in gradient-based attribution methods, based on zero-deflation abnormality and channel-wise average abnormality.
(2) The proposed GAIA approach is simple and effective and does not require further fine-tuning or additional training, achieving superior performance to previous SOTAs.
Weaknesses: (1) The paper lacks clarity and organization in presenting the proposed approach and the experimental results. For example, the mathematical derivation in Section 4 is difficult for readers to follow. It is unclear to me how Eq. 4 is derived from Eq. 3, and what does |·| refer to?
(2) Ablation studies are comprehensive but messy. It is hard to follow the logic of the paper's writing. Please give an overview of the ablations before the details of each ablation study for better understanding.
(3) I am curious about the effect of combining GAIA-Z and GAIA-A, but there is no relevant experiment or explanation about it.
(4) The paper writing should be further improved. There are many mistakes. For example, the highlighting in Table 2 is wrong (some bests and second bests are swapped); in line 169 of Page 5, the range of c_i should be placed below the argmax.
(5) Although the motivation is clear and the proposal is effective, the lack of clarity, missing implementation details, and deficient writing quality are major weaknesses that impact the overall quality of the paper. I recommend polishing this paper carefully and submitting it to other conferences such as CVPR or ICLR, where it would be good work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) I am curious about the effect of combining GAIA-Z and GAIA-A, but there is no relevant experiment or explanation about it.
(2) The paper writing should be further improved. There are many mistakes. For example, the highlighting in Table 2 is wrong (some bests and second bests are swapped); in line 169 of Page 5, the range of c_i should be placed below the argmax.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer kfJE
We sincerely appreciate your valuable feedback on our paper and thank you for taking the time to review it. In response to your review, we have addressed the issues raised and made improvements to enhance the clarity, organization, and overall quality of the paper. Below, we outline our rebuttal addressing the specific points you mentioned:
> Concern 1: Clarity and Organization.
Thank you for providing valuable suggestions for improving our paper. We would like to start by outlining the original organization of ideas in the initial manuscript and then proceed to detail the improvements we have made:
Original organization:
In this paper, we begin by addressing an observed phenomenon (gradient-based attribution methods yield cluttered and uncertain results when providing visual explanations for out-of-distribution image predictions by the model). We then formulate an explanation for this phenomenon in the context of Taylor expansion (Eq. (3) and Eq. (4)). Based on the explanation, we introduce two types of abnormalities, namely, the Zero-deflation abnormality and the Channel-wise average abnormality, which we derive as effective tools for detecting out-of-distribution (OOD) samples.
The advantage of this approach is that **it benefits readers' understanding of our idea and the motivation behind our proposed method** (mentioned by Reviewer 1 and Reviewer 2). However, a drawback is that Eq. (3) and Eq. (4) lack sufficient context, which may pose challenges for readers when engaging with the methods.
Our improvements:
- We have relocated the "Related Work" section to the end of the manuscript to ensure a smoother flow of ideas.
- In Section 4, we have repositioned Equations 3 and 4 to an additional theoretical analysis section at the end of the paper. Detailed explanations of these equations are provided in the appendix for greater clarity. Moreover, in order to enhance the reader's comprehension of our proposed idea, we have incorporated the visualization that elucidates the connection between the attribution phenomenon and the two abnormalities. This visualization expands upon the concept presented in Appendix B by visually highlighting the relationship between the attribution phenomenon of OOD samples and the two abnormalities.
- For the ablation experiments, we have restructured the sequence and introduced guiding statements at the beginning to elucidate the logical flow of the ablation study. Our ablation study begins by validating the effectiveness of each step of the method, moving from outermost to innermost. We first verify the effect of the Frobenius norm (2-norm), followed by a deeper exploration of the aggregation's effectiveness on the input space (Influence of input space aggregation across different layers (blocks)) and label space (Influence of label space aggregation). Lastly, we validate the overall method's effectiveness across various model capacities. Moreover, all Figures and Tables have been arranged in a coherent manner, following the sequence from GAIA-A to GAIA-Z.
> Concern 2: The effect of combining GAIA-Z and GAIA-A.
This suggestion is highly insightful, as GAIA-A and GAIA-Z exhibit distinct strengths and weaknesses on datasets of different scales. **Combining them presents an interesting idea with potential benefits**.
However, the reason we initially did not explore this avenue in our manuscript was due to that **GAIA-Z and GAIA-A are mutually independent**. They are two approaches that explore model attribution in different directions to reflect the model's uncertainty. GAIA-Z is derived from the Null-player axiom, which states that a feature should be considered as having zero importance when it makes no contribution to the model's output. GAIA-Z focuses on determining how certain the model is about its final predictions. On the other hand, GAIA-A places more emphasis on detecting the abnormality arising from gradient-based attribution methods (e.g., GradCAM) when they sum the attribution gradients as channel-wise weights. GAIA-A aims to collect extreme outlier values in this process.
Therefore, **the magnitudes of the scores generated by these two methods are not directly comparable**. **We attempted a direct summation on benchmarks but observed no significant enhancement**. The original intention behind proposing GAIA-A and GAIA-Z was to present two plug-and-play methods (not requiring training on in-distribution data) with a focus on minimizing the additional parameters.
However, **the direction provided by the reviewer is valuable**, such as normalizing the scores generated by GAIA-A and GAIA-Z using in-distribution data before summation or introducing a tunable or learnable coefficient between the two scores. **We plan to conduct further in-depth research along these lines, incorporating the experimental results in the appendix and highlighting them in future work**.
> Concern 3: Typos in the paper.
We appreciate your attention to the writing quality and errors in the paper. We have dedicated considerable effort to improving the paper's writing. Specifically, we have carefully proofread the manuscript and corrected the typos you mentioned, as well as others we have found (grammar, table formatting, descriptive details, mathematical expressions, and so forth). | Summary: This paper presents an approach to OOD detection in deep neural networks. The authors propose a method based on analyzing the uncertainty that emerges when models attempt to rationalize their predictive decisions. The abnormalities are found by using two strategies: the zero-deflation abnormality that takes advantage of the observation that attribution gradients in OOD data have more zero values than in-distribution and channel-wise average abnormality that captures variations in the feature maps of OOD data compared to in-distribution data.
The experiments are performed on ImageNet-1K and CIFAR benchmarks and include ablation studies to understand the impact of model capacities.
Strengths: The paper provides an interesting approach to OOD detection by leveraging the concept of attribution gradients. I find the two forms of gradient abnormalities for OOD detection very promising in approaching the problem.
The paper is very well written and clear. The experiments use robust setups on well-known ImageNet-1K and CIFAR benchmarks and the ablation studies are interesting to validate the hypotheses and claims. I suggest experimenting with more datasets to validate the proposition in different contexts.
OOD detection is an important problem in the field of deep learning and has numerous practical applications in enhancing the safety and reliability of deep neural network applications.
Weaknesses: A more direct comparison with other gradient-based methods would be beneficial. While the authors compare their method to GradNorm, it would be interesting to see how GAIA compares to other methods that also utilize gradient information for detection or other tasks.
I think a more detailed explanation of the proposed abnormalities is needed, as it would be beneficial to have a more intuitive or visual explanation to aid in understanding these concepts. The authors could elaborate more in this direction.
While ImageNet-1K and CIFAR are standard benchmarks, it would be important to see how the method performs on different types of data, such as text or audio data, or even different image datasets.
The paper does not address the computational efficiency and scalability of the proposed method. I think a discussion on the proposed method's potential limitations and failure modes would be a valuable addition.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors provide a comparison with other gradient-based methods?
Can the authors elaborate more or provide visual aids to help intuitively understand the proposed abnormalities?
Can the method deal with other types of data (text, audio, etc.)?
Could the authors comment on the computational efficiency and scalability of GAIA?
The authors claim that no hyperparameters are required for their method. Can the authors clarify this point?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It is an interesting paper, but a few areas could use more in-depth discussion on the scalability and computational efficiency of the method. While the authors show impressive performance on several benchmark datasets, they do not fully discuss the method's robustness to various forms of OOD shifts, which are common in real-world scenarios.
While both method variants show superior performance compared to other methods, there's a noticeable difference between the two. GAIA-Z generally achieves a lower FPR95 and a higher AUROC than GAIA-A. Is GAIA-Z the more precise variant? Can you elaborate on this direction?
The authors do not discuss where and why the GAIA-A and GAIA-Z methods fail. Failure case analysis is crucial for understanding the limitations of the methods and guiding future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Exp8
Thank you for your positive evaluation of our paper. Below, we will address each of your questions:
>Q1: Could the authors provide a comparison with other gradient-based methods?
Regarding the comparative table, **please refer to table 1 in the PDF of the global rebuttal**. Additionally, detailed descriptions and analysis can be found in Appendix Section H of the supplementary materials. **We will include the comparison in a "Discussion" section of the main paper**.
> Q2: Can the authors elaborate more or provide visual aids to help intuitively understand the proposed abnormalities?
**Please refer to figure 1, figure 2, and figure 3 in the PDF of the global rebuttal.**
In Figure 1, we visualize the abnormality in the attributions. Details of figure 2 and figure 3 can be found in Appendix Section B of the supplementary materials.
> Q3: Can the method deal with other types of data (text, audio, etc.)?
Our method is based on attribution gradients, which are extensively utilized in visual interpretation techniques, primarily for CNN-based networks. This line of research also holds significance for other CNN-based classification tasks: for instance, GAIA has shown its efficacy in audio classification tasks (**please refer to table 2 in the PDF of the global rebuttal**).
> Q4: Could the authors comment on the computational efficiency and scalability of GAIA?
Compared with other gradient-based methods, GAIAs support batch processing, as the attribution gradients are independent for each input feature.
We conducted tests on a Tesla V100 to measure the average time taken to process a single image under different batch conditions for both CIFAR benchmarks and the ImageNet benchmark.
| Settings | MSP | Energy | ODIN | ReAct | GradNorm* | RankFeat | GAIA-A | GAIA-Z|
| ------ | ----| ---- | --- | --- | --- | --- | --- | --- |
|CIFAR(batch=1) | 5.10ms | 5.2ms | 7.23ms | 32.85ms | 25.32ms | 8.85ms | 36.39ms | 35.59ms |
|CIFAR(batch=128) | 0.24ms | 0.26ms| 0.37ms | 0.73ms | 25.32ms| 3.03ms | 1.01ms | 0.52ms |
|ImageNet(batch=8) | 49.11ms | 46.03ms | 67.24ms | 59.43ms | 143.47ms | 79.61ms | 54.14ms | 87.24ms |
***For GradNorm, the batch size has been consistently set to 1.**
GAIA methods require the use of attribution gradients from the feature layer of the last block, and the primary time consumption lies in obtaining the attribution gradients through backpropagation. However, **as the batch size increases, GAIA speeds up considerably, since the computations beyond backpropagation are comparatively lightweight. With parallelization, a single backward pass can yield attribution gradients for multiple images**.
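To make the batching point concrete, here is a minimal PyTorch sketch (our illustration, not the paper's implementation; the toy model and hooked layer are placeholders) showing how a single backward pass over a summed score yields the attribution gradients for every image in a batch:

```python
import torch
import torch.nn as nn

# Toy CNN standing in for the real backbone; the hook captures the
# activation of an intermediate layer so its gradient can be read later.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)

feats = {}
def hook(module, inp, out):
    out.retain_grad()          # keep the gradient of this non-leaf tensor
    feats["A"] = out
model[1].register_forward_hook(hook)

x = torch.randn(128, 3, 32, 32)        # one batch of images
logits = model(x)
# Summing the per-image top scores lets ONE backward pass produce the
# attribution gradients for every image: each image's gradient depends
# only on its own score, so the batched gradients are independent.
logits.max(dim=1).values.sum().backward()
grads = feats["A"].grad                 # shape: (128, 8, 32, 32)
```

The key point is that no per-image loop is needed: the summed scalar decouples into independent per-image terms under differentiation.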
> Q5: The authors claim that no hyperparameters are required for their method. Can the authors clarify this point?
GAIA-A and GAIA-Z are plug-and-play OOD detection methods, meaning they do not have hyperparameters that need adjustment on different in-distribution datasets. For example, methods like Energy, ODIN, ReAct, and Mahalanobis require tuning their hyperparameters (temperature, perturbation, thresholds, etc.) for different in-distribution datasets.
> Q6: Discuss the method's robustness to various forms of OOD shifts.
This is a valuable suggestion. We have also taken into consideration various shift scenarios.
Specifically, we explore whether our approach can still categorize images as ID when there is a domain shift (covariate shift). We employed CIFAR-10C as the dataset exhibiting domain shift and compared its scores with those of CIFAR-10. The similarity in scores between the two datasets indicates the robustness of our method in handling in-distribution test data affected by domain shift.
> Q7: Is GAIA-Z the more precise variant? Can you elaborate on this direction?
We consider that GAIA-A is also an effective method. GAIA-Z performs well when dealing with small-scale images, such as the CIFAR benchmarks. However, when applied to larger datasets like ImageNet, it may encounter more significant disturbances and challenges. On the other hand, GAIA-A has the advantage of collecting more anomalies in larger label spaces, making it more effective on datasets like ImageNet. Additionally, the two-stage enhancement process further improves its performance on the ImageNet dataset. Moreover, GAIA-A's ability to detect OOD through analyzing anomalous behavior in visual attribution methods provides insightful implications and holds potential for further exploration in this direction.
> Q8: Where and why the GAIA-A and GAIA-Z methods fail.
Newer models like Vision Transformers (ViT), which are based on transformers, excel in feature extraction. However, they may not align well with image-specific characteristics. For instance, ViT employs positional encoding to capture spatial information, posing challenges for attribution. Due to these reasons, existing attribution methods are rarely applied to ViT models (**see figure 4 of the pdf**), resulting in poorer performance for GAIA on ViT. **We acknowledge this limitation of the current GAIA method and have included it in the "Limitations" section. Furthermore, it serves as a potential avenue for future improvements and exploration.**
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a comprehensive rebuttal and the effort to improve the paper.
The visualizations incorporated offer a clearer understanding of the abnormalities. Looking at the images, I was also considering if a signal-to-noise analysis could provide some insights if the features considered are more related to the foreground than the background.
Also, considering your last comment, I'm not sure I understand the problem of the proposed approach in dealing with ViT. Given the growing popularity and effectiveness of transformer architectures, how could the proposed approach be adapted or evolved to accommodate these new advancements?
---
Reply to Comment 1.1.1:
Comment: We appreciate your insightful viewpoints and thoroughly enjoy the discussion.
> Q1: Looking at the images, I was also considering if a signal-to-noise analysis could provide some insights if the features considered are more related to the foreground than the background.
This is indeed an interesting perspective. We have given consideration to this issue and have come up with a few directions to explore.
If we consider the attribution map on the feature map as a grayscale image, with attribution values as pixel values, features deemed to contribute to predictions (useful features) could be considered as signals.
Under the assumption of a highly effective visual explanation method, these useful features, such as the foreground, should have high pixel values, making them meaningful signals. On the other hand, the background should possess low pixel values, representing insignificant noise. Therefore, we can use the signal-to-noise ratio (SNR) to quantify the uncertainty of the model's attribution explanations:
$$\text{SNR (dB)} = 10\log_{10}\left(\frac{\text{Signal Power}}{\text{Noise Power}}\right)$$
If the model makes a confident and correct decision, the SNR should be very high. However, this approach also presents some challenges: 1) How do we define the scope of the signal (foreground)? We may need a threshold or algorithm to differentiate which regions on the attribution map are background and which are foreground. 2) When there are no in-distribution objects in the entire image (OOD), we should consider the entire image as noise. In our observations, this scenario can result in widely scattered and exceptionally high noise values. We need to design a more appropriate metric to reflect SNR.
**We consider that this approach could be highly beneficial for object-level OOD detection.** When an image contains multiple objects that need to be recognized, we can consider the regions within the bounding boxes as the signal (foreground) and those outside the boxes as the background. Then we can calculate the SNR on the attribution map to reflect the model's uncertainty about predictions within these regions. This method might be effective in detection scenarios with relatively stable environments, such as autonomous driving, industrial identification, etc.
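As a concrete illustration of the masked-SNR idea (our own sketch, with a hypothetical `attribution_snr_db` helper; the foreground mask could come from a bounding box), signal and noise power can be taken as the mean squared attribution inside and outside the mask:

```python
import numpy as np

def attribution_snr_db(attr_map, fg_mask, eps=1e-12):
    """SNR (in dB) of an attribution map, treating attribution values
    inside the foreground mask as signal and the rest as noise."""
    signal = attr_map[fg_mask]
    noise = attr_map[~fg_mask]
    signal_power = np.mean(signal ** 2) if signal.size else 0.0
    noise_power = np.mean(noise ** 2) if noise.size else 0.0
    return 10.0 * np.log10((signal_power + eps) / (noise_power + eps))

# Confident in-distribution case: strong foreground, silent background.
attr = np.zeros((8, 8)); attr[2:6, 2:6] = 1.0
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
print(attribution_snr_db(attr, mask))   # large positive value (~120 dB)
```

An OOD input with scattered attribution over the whole map would instead produce a low or negative SNR under this definition.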
> Q2: Considering your last comment, I'm not sure I understand the problem of the proposed approach to deal with ViT. Given the growing popularity and effectiveness of transformer architectures, how to adapt or evolve the proposed to accommodate these new advancements?
Given the current trends, providing visual explanations for large models has been a focal point for both the interpretability academic community and the engineering efforts in recent years. Honestly, this is also the research direction we are currently pursuing.
Overall, our approach involves utilizing model-explained prediction uncertainty for OOD detection. For large models, our ongoing research primarily focuses on two directions:
- Direction 1: **For transformer-based model.**
Despite the differences between transformer-based architectures and the convolutional feature layers of CNNs, methods based on attribution gradients can still provide explanations for predictions made by models using transformer structures (such as ViT). For the transformer-based model, we consider the attribution gradients on the attention matrix.
Many traditional CAM methods, including GradCAM, are initially proposed based on CNN structures, and therefore, their performance on transformer-based models might be suboptimal. **However, there is a line of improvements available now to enhance the gradient-based visual explanations on such models**. One prominent example is [1]. Building upon these enhancements, we are currently researching how to identify attribution gradient abnormality on the attention matrix to reflect the model's uncertainty.
Furthermore, the Attention mechanism in transformer-based models can also offer directions for visual explanations. Researching uncertainty in this context can further enhance OOD detection.
[1] Chefer, Hila, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. CVPR. 2021.
- Direction 2: **For CNN-based backbone.**
Although the mainstream of large models isn't purely CNN-based, many multimodal large models (like CLIP) and downstream tasks (such as object detection) still utilize CNNs as the backbone networks for visual feature extraction.
How to reflect the model's uncertainty in visual feature extraction on these backbones is also a topic we are currently researching. | Rebuttal 1:
Rebuttal: We appreciate all the reviewers' time and valuable feedback. We are delighted that the reviewers found our article to be **clear**, **easy to read** (**R1**, **R2**), and regarded our method as both **simple and effective (R1, R3, R4, R5)**. It is also great to hear that our findings are **interesting and innovative (R2, R3, R4)**.
We have addressed the reviewers' comments and concerns in individual responses to each reviewer. And we have summarized the changes as follows:
- We have added comparisons with RankFeat (block 3+4) (**R1**), ASH (**R4**) and KNN (**R5**) in our experiments.
- We have included further theoretical analysis of GAIA-Z based on the Null-player axiom in the appendix (**R1, R4**).
- We have introduced a new "Discussion" section to analyze the differences between our method and other gradient-based approaches (**R2, R4, R5**).
- We have discussed the limitations of GAIA on transformer-based models in the "Limitations" section (**R2, R4**).
- We have restructured the presentation of the "Method" section to enhance understanding (**R3**).
- We have dedicated considerable effort to improving the paper's writing (**R3**) and have corrected the typos (**R1, R3**).
- We have rephrased our assumptions regarding performance on a large label space in a more rigorous manner and will supplement our study with experiments (**R5**).
- We plan to conduct further in-depth research on the effect of combining GAIA-Z and GAIA-A (**R3**).
- We will add deviations with each empirical result of our methods in the main experiments (**R5**).
***R1:** TjRi, **R2:** Exp8, **R3:** kfJE, **R4:** qAeX, **R5:** CiZ8.
**Our main contribution**:
Our main contribution is that we target **bridging the gap between OOD detection and visual interpretation by utilizing the uncertainty of a model in explaining its own predictions**. Visual explainability methods are employed to attribute a model's predictive outcomes. We endeavor to uncover uncertainty when explaining anomalies outside the label space, aiming to detect OOD samples. This is a novel domain, as the realm of visual interpretability based on attribution gradients is vast and theoretically comprehensive. We believe this constitutes a highly promising avenue for research.
Below are the responses to the open questions raised by R1:
>Open question 1: Pre-training on one dataset and testing on another, yet the dataset itself is naturally domain inconsistent, is there a real-world application for this?
OOD detection aims to ensure the trustworthiness and safety of machine learning models in an open-world setting. In practical deployment, pre-trained models may encounter unknown natural inputs that surpass their cognitive capabilities, leading to overconfident decision-making. For instance, for a trained food classifier, when a user uploads a non-food image, we hope to have a method that can recognize this as an unknown input and refrain from misclassifying it into any food category erroneously. In safety-critical scenarios like autonomous driving systems, when the driving system identifies unknown objects, it should trigger an alert and hand over control to the driver.
>Open question 2: What is the OOD sample in the real-world application? If we only have the dataset on a sunny day but a bird on a rainy day, would that be considered an OOD sample? Will the method GAIA in the paper recognize birds in rainy weather as OOD samples? Or only images that do not contain objects will be treated as OOD?
OOD detection aims to ensure safety during deployment by identifying natural samples that the model was not originally trained to recognize. For instance, if the model is trained to classify animal categories such as birds, dogs, and cats (ID), then airplanes can be considered as OOD samples (called semantic shifts). If the training data includes images of birds on a sunny day, the images of birds on a rainy day will be considered from a different domain. This is typically referred to as covariate shift. In such cases, researchers often focus on the model's transferability or generalization capabilities, which are separate research topics (such as domain adaptation, domain generalization, etc). In the benchmarks we use, the testing and training ID samples are kept separate. Taking the CIFAR10 benchmark as an example, the model is trained on the training set, but during the evaluation, the model is tested on an unseen testing set, which includes birds in different environments. Our method is capable of effectively distinguishing between ID samples (that the model has seen during training) and OOD samples (that are novel to the model).
Pdf: /pdf/c2752a8af4d15820791e6726263b41d93ee98b8d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors present an innovative perspective on quantifying the disparities between in-distribution (ID) and out-of-distribution (OOD) data. The authors observed that gradient-based attribution methods face challenges when assigning feature importance to OOD data, leading to significantly divergent explanation patterns.
To address this issue, the authors investigate how attribution gradients contribute to uncertain explanation outcomes and introduce two forms of abnormalities for OOD detection: the zero-deflation abnormality and the channel-wise average abnormality. To overcome these challenges, they propose a new approach called GAIA (Gradient Abnormality Inspection and Aggregation), which is simple yet effective. Importantly, GAIA can be directly applied to pre-trained models without the need for further fine-tuning or additional training. The results demonstrate that GAIA outperforms existing approaches on commonly utilized benchmarks such as CIFAR and large-scale benchmarks like ImageNet. Specifically, on CIFAR benchmarks, GAIA reduces the average FPR95 by 26.75% on CIFAR10 and by 45.41% on CIFAR100 when compared to competing methods, highlighting its superiority in OOD detection.
Strengths: - The idea of the paper is clear, the writing is easy to follow, and provides theoretical support.
- The proposed method GAIA is simple and effective on CIFAR benchmarks.
Weaknesses: But I'm more concerned about the effectiveness of the method:
- In the comparison in Table 1 on the large-scale benchmark ImageNet-1K, the proposed GAIA-A and GAIA-Z methods are only compared against the weaker version of RankFeat (Block 4) and don't outperform the SOTA version of RankFeat (Block 3 + 4) [1].
- There are two versions of GAIA: GAIA-A, and GAIA-Z, which have their own strengths and weaknesses on different benchmarks, but there is no guidance in the paper on which method to use in different benchmarks.
- Important points of innovation and the bulk of proofs are related to GAIA-A, but GAIA-A does not outperform GAIA-Z on most benchmarks (e.g. CIFAR). This reduces the effectiveness of GAIA-A.
- Typo: line 173, "with with".
Reference:
[1] Song Y, Sebe N, Wang W. Rankfeat: Rank-1 feature removal for out-of-distribution detection[J]. Advances in Neural Information Processing Systems, 2022, 35: 17885-17898.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What is the meaning of zero baseline output S_c ($0$) in Eq. (3)?
2. What are the inherent reasons for the difference in effectiveness between GAIA-A and GAIA-Z?
For more please refer to the Weaknesses part.
Open question:
- Pre-training on one dataset and testing on another, yet the dataset itself is naturally domain inconsistent, is there a real-world application for this?
- What is the OOD sample in the real-world application? If we only have the dataset on a sunny day, but a bird on a rainy day, would that be considered as an OOD sample? Will the method GAIA in the paper recognize birds in rainy weather as OOD samples? Or only images that do not contain objects will be treated as OOD?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer TjRi
Thank you for your constructive feedback. We are pleased that you find our presentation clear and easy to follow. And we address your questions below:
**For the open questions, we have included the replies in the global rebuttal due to the length limit of the rebuttal (not exceeding 6000 characters).**
> Q1: What is the meaning of zero baseline output $S_c (0)$ in Eq. (3)?
In Eq. (3), "baseline" can be understood as **the initial value used as a reference in attribution methods**. This analytical form is commonly adopted in visual interpretability methods. In our paper, we take all-zero feature values as the "zero baseline", a common choice when analyzing gradient-based attribution methods. $S_c(0)$ represents the model's c-label output w.r.t. the baseline. Then in Eq. (4), $|S_c(z) - S_c(0)|$ represents the c-label output change caused by feature $z$ in the model's predictions. Using the zero baseline also simplifies the form of the Taylor expansion, making it easier to analyze further. With this approach, the output change in Eq. (4) can be represented as a combination of feature gradients with respect to the output. Consequently, it becomes possible to determine the contribution of each feature gradient to the final output change.
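To illustrate the role of the zero baseline, here is a small NumPy sketch (our illustration, using a toy linear score function, not the paper's model) where the first-order Taylor expansion around zero is exact and each feature's contribution is its gradient times its value:

```python
import numpy as np

# For a linear "model" S_c(z) = w.z + b, the first-order expansion around
# the zero baseline is exact: S_c(z) - S_c(0) = sum_i (dS_c/dz_i) * z_i.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3
z = rng.normal(size=5)

def S_c(z):
    return w @ z + b

grad = w                                  # dS_c/dz for a linear model
per_feature = grad * z                    # contribution of each feature
assert np.isclose(S_c(z) - S_c(0 * z), per_feature.sum())
```

For a nonlinear network the equality becomes approximate, but the zero baseline still lets the output change be decomposed into per-feature gradient terms.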
> Q2: What are the inherent reasons for the difference in effectiveness between GAIA-A and GAIA-Z?
In this paper, **we target bridging the gap between OOD detection and visual interpretation by utilizing the uncertainty of a model in explaining its own predictions**. GAIA-Z and GAIA-A are two approaches that explore model attribution in different directions to reflect the model's uncertainty. GAIA-Z is derived from the Null-player axiom [1], which states that a feature should be considered as having zero importance when it makes no contribution to the model's output. For example, if the model makes overconfident predictions for OOD samples (e.g., classifying the grassland as a bird), **GAIA-Z focuses on determining how certain the model is about its final predictions**. In contrast, when using visual interpretation to explain why a sample is classified as a bird, GAIA-Z might produce many non-zero importance features, leading to messy attribution maps. On the other hand, **GAIA-A places more emphasis on detecting the abnormality arising from gradient-based attribution methods** (e.g., GradCAM) when they sum the attribution gradients as channel-wise weights. GAIA-A aims to collect extreme outlier values in this process.
[1] Khakzar, Ashkan, et al. "Do explanations explain? Model knows best." CVPR. 2022.
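As a rough illustration of the zero-deflation intuition (our own sketch, not the paper's exact GAIA-Z score), one could measure the fraction of near-zero attribution gradients; a high fraction suggests the model cannot assign importance to the features:

```python
import numpy as np

def zero_deflation_score(attr_grads, tol=1e-8):
    """Fraction of (near-)zero entries in the attribution gradients;
    higher values suggest the model struggles to attribute importance,
    hinting at an OOD input."""
    return float(np.mean(np.abs(attr_grads) < tol))

id_like = np.array([0.2, -0.5, 0.7, 1.1])       # mostly non-zero grads
ood_like = np.array([0.0, 0.0, 0.0, 0.3])       # mostly zero grads
assert zero_deflation_score(ood_like) > zero_deflation_score(id_like)
```

In practice the score would be computed over the attribution gradients of the last feature block rather than a toy vector.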
> Q3: No guidance in the paper on which method to use in different benchmarks;
GAIA-Z performs well when dealing with small-scale images, such as the CIFAR benchmarks. However, when applied to larger datasets like ImageNet, it may encounter more significant disturbances and challenges. On the other hand, GAIA-A has the advantage of collecting more anomalies in larger label spaces, making it more effective on datasets like ImageNet. Additionally, the two-stage enhancement process further improves its performance on the ImageNet dataset.
> Q4: Comparison with RankFeat (Block 3 + 4).
Thanks for your comments. We compared our method with RankFeat (block 4) because **our approaches also only utilize information from block 4**. When using the same amount of information, GAIA-A achieves better results. Additionally, **including data from an extra block would lead to a decrease in inference-time performance**. In our testing on CIFAR10 (batch size 128), RankFeat (block3+4) takes an average of 5.3ms to process a single image, while GAIA-A takes an average of 1.01ms and GAIA-Z an average of 0.52ms. In this case, GAIA is at least five times faster. **Of course, to provide a more objective comparison, we have modified Table 1 and Table 2 to add data from RankFeat (block3+4) as a reference**. While RankFeat (with an average FPR95 of 36.80%) performs slightly better than GAIA-A (with an average FPR95 of 37.42%) on ImageNet, **GAIA still demonstrates significant improvements on CIFAR benchmarks**.
| Methods | CIFAR10 (Avg FPR95) $\downarrow$ | CIFAR10 (Avg AUROC) $\uparrow$| CIFAR100 (Avg FPR95) $\downarrow$ | CIFAR100 (Avg AUROC) $\uparrow$|ImageNet-1K (Avg FPR95) $\downarrow$ | ImageNet-1K (Avg AUROC) $\uparrow$|
| ------ | ----| ---- | --- | --- | --- | --- |
|RankFeat (3+4)| 62.46% | 83.62% | 90.75% | 68.99% | **36.80%** | **92.15%** |
|GAIA-A | 12.73% | 97.53% | 68.97% | 86.42% | 37.42% | 91.90% |
|GAIA-Z| **3.26%** | **99.28%** | **29.10%** | **94.93%** | 50.65% | 89.03% |
> Q5: Proof in GAIA-Z less than GAIA-A and the effectiveness of GAIA-A.
As mentioned in Q2, GAIA-Z, and GAIA-A both provide valuable insights into model uncertainty, offering a way to connect OOD detection and visual explanation methods. **They are mutually independent, and each method holds its value and insights**.
We have conducted a motivational analysis of the zero-deflation abnormality in section 4. In response to the reviewer's suggestion (analysis for GAIA-Z is less extensive compared to GAIA-A), **we included further theoretical analysis of GAIA-Z based on the Null-player axiom (mentioned in Q2) in the appendix**.
**We consider that GAIA-A is also an effective method**. On the ImageNet-1k benchmark, GAIA-A performs better than GAIA-Z, indicating that GAIA-A has its own advantages over other gradient-based methods on larger datasets. ImageNet-1K is a large-scale dataset comparable to CIFAR benchmarks in terms of scale. Moreover, GAIA-A's ability to detect OOD through analyzing anomalous behavior in visual attribution methods provides insightful implications and holds potential for further exploration in this direction.
> Q6: Typo: line 173, "with with."
We sincerely appreciate your thoroughness and attention to detail. We have diligently reviewed and corrected the typos in our manuscript.
---
Rebuttal 2:
Title: Response to Authors
Comment: The authors addressed my concerns to some extent. Could the authors provide a more professional analysis of why "GAIA-A has the advantage of collecting more anomalies in larger label spaces"? Not just from an experimental observation view.
---
Rebuttal Comment 2.1:
Title: Thanks for your comments
Comment: Thank you for taking the time to read our rebuttal and engaging in timely discussions with us.
> Q: Could the authors provide a more professional analysis of why "GAIA-A has the advantage of collecting more anomalies in larger label spaces"? Not just from an experimental observation view.
Of course! We put forward this viewpoint because GAIA-A has the ability to aggregate information from all predicted outputs.
As mentioned in the rebuttal, GAIA-A aims to gather extreme anomaly values of weights (channel-wise average gradients) in the gradient-based attribution method to reflect uncertainty. Consider that the aggregation region has $L$ feature layers, and each layer has $K$ channels for ease of representation. The overall expectation of the abnormality $\mathbb{E}[\epsilon]$ can be represented as:
$\begin{equation}
\mathbb{E}[\epsilon] = \sqrt{\sum\limits_{l\in L} \sum\limits_{k\in K} \| \mathbb{E}[\epsilon|\textbf{A}^{kl}]\|^2}
\end{equation}$
Then, we analyze the expectation of the abnormality on an individual k-channel,
$\begin{equation}
\mathbb{E}[\epsilon|\textbf{A}^{kl}] = \left\| \sum\limits_{i, j} \frac{\partial \Gamma(S_c(\textbf{A}^l))}{\partial \textbf{A}^{kl}_{ij}} \right\| = \left\| \Omega \right\|
\end{equation}$
where $\Gamma(\cdot)$ represents a method of aggregating over label outputs, while $\Omega$ signifies the aggregation of attribution gradients. In our paper, we utilize the log-softmax aggregation approach. For the sake of simplicity in analysis, let us consider aggregation as a summation. Thus, the aggregation gradient $\Omega$ can be decomposed as follows:
$\begin{equation}
\Omega = \frac{\partial \sum\limits_{c\in C}S_c(\textbf{A}^l)}{\partial \textbf{A}^{kl}} = \sum\limits_{c\in C} \frac{\partial S_c(\textbf{A}^l)}{\partial \textbf{A}^{kl}} = \sum\limits_{c\in C} \omega_c
\end{equation}$
where $\omega_c$ represents the weight corresponding to the attribution for the $c$-label output $y_c$. Therefore, the expectation of abnormality on this channel can be represented as the sum of the expectation of abnormality across the whole label space.
$\begin{equation}
\mathbb{E}[\epsilon|\textbf{A}^{kl}] = \sum\limits_{c\in C} \mathbb{E}[\epsilon|\textbf{A}^{kl}, y_c]
\end{equation}$
In other words, since GAIA-A relies on collecting weight anomalies, a larger label space allows it to gather more anomalies, thereby better reflecting the model's uncertainty. | null | null | null | null | null | null |
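The accumulation argument can be illustrated numerically (a toy sketch under the simplifying assumptions above: a linear head and summation aggregation, so each label contributes its own weight vector to $\Omega$; not the paper's actual log-softmax setup):

```python
import numpy as np

# Toy linear head: logits S_c(A) = W @ A, so dS_c/dA = W[c] and the
# aggregated gradient over the label space is a sum of per-label rows.
rng = np.random.default_rng(1)

def aggregated_abnormality(num_labels, num_feats=64):
    W = rng.normal(size=(num_labels, num_feats))
    omega = W.sum(axis=0)          # sum_c dS_c/dA  (summation aggregation)
    return np.linalg.norm(omega)   # magnitude of the aggregated gradient

# More labels -> more per-label anomaly terms accumulate in Omega,
# so the aggregated magnitude grows with the size of the label space.
small, large = aggregated_abnormality(10), aggregated_abnormality(1000)
print(small, large)
```

Under these assumptions the expected magnitude grows roughly like the square root of the number of labels, which is consistent with the claim that a larger label space lets GAIA-A collect more anomalies.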
Binarized Neural Machine Translation | Accept (poster) | Summary: They propose a novel binarization technique for Transformers applied to machine translation (BMT), the first of its kind. They identify and address the problem of inflated dot-product variance when using one-bit weights and activations. Specifically, BMT leverages additional LayerNorms and residual connections to improve binarization quality. Experiments on the WMT dataset show that a one-bit weight-only Transformer can achieve the same quality as a float one, while being 16× smaller in size. They further conduct a scaling law study using production-scale translation datasets, which shows that one-bit weight Transformers scale and generalize well in both in-domain and out-of-domain settings.
Strengths: 1. A novel binarized NMT model is proposed, which may be useful for production serving.
2. The proposed scaling factor to mitigate the activation variance is simple and effective, and the whole model architecture looks convincing.
3. The experimental results are very solid.
Weaknesses: 1. How the training and inference efficiency change compared to the float model is not discussed.
2. The released code should include a README to help the reviewer run and check the main results.
3. Typo error, line 348.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: More discussion about the limitations should be presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: How the training and inference efficiency change compared to the float model is not discussed.**
Thanks for commenting on the efficiency side. Such measurement is not currently possible because we are not aware of an ecosystem (accelerator combined with software stack) that supports it for 1-bit models. However, there is convincing evidence that 1-bit matmuls will have high performance. For example, NVIDIA's A100 architecture [1] shows that 1-bit matmul is 8x faster than 8-bit as measured by TFLOPS throughput, though it requires NVIDIA's own assembly. Also, [2] shows that binary matmul is 9x-12x faster than 8-bit as measured by latency on ARM CPUs. In that light, among other goals, our work aims to contribute to the assessment of whether binary ML hardware+software is a viable future direction for the ML accelerator industry.
[1] NVIDIA Ampere Architecture Whitepaper. Table 3.
[2] Tom B., et al., Larq Compute Engine: Design, Benchmark, and Deploy State-of-the-Art Binarized Neural Networks, MLSys’21.
**Q2: The proposed code should include the readme, which can help the reviewer to run and check the main results**
We really appreciate the reviewer checking out our software design, and we will indeed write a good README. As promised in the abstract (line 12), we will open-source the code together with detailed reproduction instructions upon acceptance.
**Q3: typo error, line 348**
Thanks for pointing it out. We will remove the redundant word “the”.
**Q4: More discussion about the limitations should be presented.**
We will make the limitations more clear. (1) As stated in line 237, the activation-activation matmul binarization quality is still not ideal. We list 8 BMT variants in Table 1, among which BMT-8 (the one with activation-activation matmul binarization) has the largest loss and BLEU drop. We imagine a lot of the future effort will be dedicated to it. (2) As stated in the last part of the conclusion, there are several unanswered questions in this work, for example, shall we scale up the depth or width for the binarized Transformer and how should we design a mixed-precision scheme using binarization and potentially other formats? We will expand these discussions in the next revision.
---
Rebuttal Comment 1.1:
Title: Rebuttal Read
Comment: Thanks for your answers. | Summary: The paper proposes a novel quantization scheme to binarize transformer machine translation models. The method consists of inserting additional layer normalization for activations and also additional residual connections. The authors demonstrate good results on the WMT test set especially for weight-only binarization, and promising results for both weight and activation binarization.
Strengths: - The paper is well written and the method description is precise.
- Promising results in the fully binarized setting, which is difficult to obtain
- One of the few papers attempting binarization for natural language generation
- Great scaling law study of binary models on a large training set - which is perhaps the most interesting part of the paper
Weaknesses: - Binarized activation results are still quite poor.
- Main results only test on one benchmark (WMT17 en-de)
Although it is contemporary, please consider comparing to the following work:
https://arxiv.org/abs/2306.01841
Having some strong established baseline could make the conclusions of the work stronger.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: none
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Binarized activation results are still poor.**
Yes, as highlighted in line 237 in Section 4.1, one challenge we identified in this work is, more precisely, that the attention layer activations are the bottleneck to a high-quality binarized Transformer for sequence generation tasks. We hope our analysis will shed light on future improvements.
**Q2: Main results only test on one benchmark (WMT17 en-de)**
Beyond the WMT17 en-de dataset, we also train the model on our in-house translation corpus that contains 3 billion web-crawled sentence pairs in Section 4.2 and evaluate on both in-domain and out-of-domain datasets that span multiple categories. The detailed training and evaluation dataset information is provided in Appendix A.1. More evaluation results can be found in Appendix Figures 7, 8, and 9. In the main paper we put the key results and distill the lessons we learned.
**Q3: Although it is contemporary, please consider comparing to the suggested work.**
Thanks for sharing this very valuable contemporary work. Since it uses a different dataset than ours, we need to re-evaluate its approach and compare. Though we cannot finish the comparison during rebuttal given our dataset size, we will definitely cite it in related works. | Summary: The authors introduce a new technique for binarization in Transformers that can be applied to machine translation known as Binarized Neural Machine Translation (BMT). They have adapted the binarization functions and training methods from PokeBNN to help address the "inflated dot-product variance" issues that arise when using one-bit weights and activations. The authors propose the use of LayerNorm in place of fixed scaling factors and make some architectural changes to improve the quality of the binarized model. Experiments show the BMT have the ability to scale and generalize effectively in both in-domain and out-of-domain settings.
Strengths: 1. The analysis of Variance Inflation in Binarization in Section 3.2 is interesting. The BERT model should also have this problem. What is the difference between it and the Transformer structure?
2. The paper provides a comprehensive experimental evaluation of the proposed BMT model on a 3-billion in-house parallel corpus. The authors make detailed analysis on the scaling law study and demonstrate the binary models can achieve the same BLEU score as float models with a smaller size.
Weaknesses: 1. Pre-LayerNormalization Transformer has already been proposed for a few years, and what's the difference with the Section 3.4 Replacement of Scaling Factor with LayerNorm?
"On Layer Normalization in the Transformer Architecture"
2. There is a lack of comparison with other quantization methods for Transformer models.
3. Based on the experimental results on in-house training data in Section 4.3, BMT still has an approximately 2-BLEU gap compared to the floating-point model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors claim that "Experiments on the WMT dataset show that a one-bit weight-only Transformer can achieve the same quality as a float one, while being 16× smaller in size." Could you please clarify how the model size reduction by a factor of 16 is defined here? Is it just a reduction in storage size of the model weights?
2. In lines 57-60, the authors mention that "each word in the output translation sequence affects the generation of the next word", so what is the quality impact of binary quantization on long text generation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not state any relevant limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: What’s the difference between the proposed layernorm in Section 3.4 compared to the existing pre-layernorm Transformer?**
Note that pre- or post-layernorm in the Transformer architecture refers to the layernorm position **outside** the entire FFN module, whereas in our proposal each linear layer **inside** the FFN module must be followed by another layernorm (scaling factor). These are extra layernorms, needed only in 1-bit models to address the dot-product variance inflation.
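As an illustrative sketch of this distinction, here is NumPy pseudocode of our own (not the paper's implementation): binarization and learnable LN parameters are omitted, and the exact placement of the extra layernorms is simplified.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Plain layer normalization over the last axis (no learnable gain/bias).
    mu = x.mean(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(x.var(axis=-1, keepdims=True) + eps)

def ffn_pre_ln(x, W1, W2):
    # Standard pre-LN Transformer: a single LN before the whole FFN sublayer.
    h = layer_norm(x)
    return x + np.maximum(h @ W1, 0.0) @ W2

def ffn_with_extra_ln(x, W1, W2):
    # Sketch of the proposal: each linear inside the FFN is additionally
    # followed by its own LN acting as a scaling factor (the binarization
    # of W1/W2 and the precise LN placement are simplified away here).
    h = layer_norm(x)
    h = layer_norm(np.maximum(h @ W1, 0.0))
    return x + layer_norm(h @ W2)
```

Both variants preserve the input shape; the difference is only where normalization is applied relative to the matmuls inside the FFN.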
**Q2: There is a lack of comparison with other quantization methods for Transformer models.**
We compared against other binarization methods for the Transformer in Table 1, last row (labeled "Base"). We outlined the literature on Transformer quantization in Section 2, beginning at line 79. Many previous works focused on 8-bit and 4-bit quantization; only a few focused on binarization. Among the binarization works, the target model is BERT, and the binarization function is applied directly together with additional distillation methods. We adapted their quantization method to our encoder-decoder Transformer and listed the result in the last row of Table 1.
**Q3: Based on the experimental results of in-house training data in Section 4.3, BMT still has about 2 BLEU gap compared to the floating-point model.**
Yes, that's correct. We concluded that in Section 4.3, line 332. There is still room to improve BMT when the dataset is comprehensive. One way, as the scaling law suggests, is to scale up the binarized model. As stated in line 333, the 30L6L binary model achieves the same BLEU score as the 8L6L float model while still being 6.7x smaller in model size.
**Q4: Could you please clarify how the model size reduction by a factor of 16 is defined? Is it just a reduction in storage size of the model weights?**
Yes, the model size reduction is defined by the reduction in the amount of bits used to store the weights, which is very important for model serving as highlighted in the challenges in Section 1 from line 24. Currently, model weights are typically stored as float16 or bfloat16 for efficiency reasons. Our technique allows the storage as 1 bit per weight, achieving a 16x compression. We will clarify this in the paper as well, thank you.
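The arithmetic behind the 16x figure is straightforward; a small sketch (the weight count below is hypothetical, chosen only so the 1-bit size comes out to the 25MB figure used in Table 1):

```python
def model_megabytes(n_weights, bits_per_weight):
    # Storage size of the weights alone in MB, ignoring any metadata.
    return n_weights * bits_per_weight / 8 / 1e6

n = 200_000_000                      # hypothetical weight count
float16_mb = model_megabytes(n, 16)  # bfloat16/float16 storage
binary_mb = model_megabytes(n, 1)    # 1 bit per weight
ratio = float16_mb / binary_mb       # 16.0 regardless of n
```

The ratio is independent of the weight count: 16 bits per weight versus 1 bit per weight is a 16x compression of weight storage.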
**Q5: What is the quality impact of binary quantization on long text generation?**
We analyzed the impact of binarization on translation generation quality in Section 4.3. Many of the translated sentences have dozens of tokens, and some are even longer. Some samples can be seen in Appendix B.1. We synthesize the lessons learned there: (1) there will be around 2 BLEU of quality loss at the same model size; (2) the quality loss can be recovered by scaling up the binary model a bit or by generating more samples when selecting the translation text.
Additionally, we study scaling law behavior on several in-house and open-source datasets (Appendix, Table 2). These datasets are a mix of different sequence lengths, ranging from short- to long-range sequences. From Figures 7 and 8 (Appendix), we don't observe any notable difference in the slopes (p_e, p_d) of the scaling law on any dataset. This indicates that binary and float models have similar scaling behavior in various scenarios with different sequence lengths, domains, naturalness, etc.
**Q6: The authors did not state relevant limitations of the method.**
We will make the limitations more clear. (1) As stated in line 237, the activation-activation matmul binarization quality is still not ideal. We list 8 BMT variants in Table 1, among which BMT-8 (the one with activation-activation matmul binarization) has the largest loss and BLEU drop. We imagine a lot of the future effort will be dedicated to it. (2) As stated in the last part of the conclusion, there are several unanswered questions in this work, for example, shall we scale up the depth or width for the binarized Transformer and how should we design a mixed-precision scheme using binarization and potentially other formats? We will expand these discussions in the next revision.
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: Thanks for your answer. | Summary: This paper presents a binarized neural translation model based on an encoder-decoder structure. The proposed method initially analyzes the challenges associated with binarized encoder-decoder models. The primary challenges arise from the significant impact of binarizing both weights and activations on result variance. Therefore, the authors primarily employ two methods to control variance: incorporating a scaling weight and adding a layer normalization layer.
To assess the effectiveness of the method, the authors report promising results obtained from experiments. The results indicate that the bottleneck stems from the attention activations. Binarizing different parts demonstrates a considerable variance in performance. However, with careful consideration of the binarization position, the proposed model achieves competitive results while significantly reducing its size. The ablation study further demonstrates the effectiveness of the proposed method in mitigating variance.
Strengths: The motivation behind the method is clearly defined. The method begins by analyzing the reasons behind the failure of directly binarizing the weights. It was discovered that this failure can be attributed to a variance problem where binarization statistically inflates the magnitude, resulting in abnormal signal propagation within the neural network. To address this issue, the authors implemented two widely-used solutions: scaling weight and layer normalization. By employing these approaches, the authors were able to develop a binarized machine translation model that yields competitive results.
Furthermore, the authors conducted scaling law experiments, revealing that binarized models also exhibit scaling law characteristics. Despite the impact of binarization on model performance, the gap can be bridged by increasing the model size. Due to the significant reduction in model size achieved through binarization, the proposed model can effectively deliver superior performance with a smaller model size.
Weaknesses: The main idea contribution of this work revolves around identifying and addressing the challenge of variance in binarizing machine translation models. However, the utilization of layer normalization and scaling weights in a straightforward manner somewhat limits its novelty and originality.
It appears that there is a lack of comparison with 8-bit or 4-bit results. Are these methods also subject to the scaling law, where larger models yield better performance?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Table 1, it is unclear which row represents the result of the BMT model with a size of 25MB. Does it imply that all rows correspond to models of approximately 25MB? Each row applies binarization to different weights and activations, resulting in significant performance variance across various settings.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Are 8-bit or 4-bit subject to the scaling law?**
Thanks for pointing out this comparison; we will add it to the scaling law section. 8-bit and 4-bit models are studied more often, and they do exhibit a scaling law where larger models yield better performance [1]. However, in that previous study the scaling law breaks starting from 3 bits, where models seem not to converge. In our study we are able to re-establish the scaling law even for 1-bit models. We show that the dot-product variance is the key and that a simple scaling factor can be a remedy. We also show in the ablation (Section 5) that a 1-bit model indeed cannot converge without a scaling factor, but it can converge with one.
[1] Tim D. et al., The Case for 4-bit Precision: k-bit Inference Scaling Laws, arxiv 2023.
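To illustrate the variance-inflation point numerically, here is a minimal NumPy sketch (illustrative only; the dimension, trial count, and seed are arbitrary, and this is not the paper's code). Binarized dot products over dimension n have variance on the order of n, and a fixed 1/sqrt(n) scaling factor restores it to roughly 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 512, 2000

a = rng.standard_normal((trials, n))
w = rng.standard_normal((trials, n))

# Dot products of sign-binarized vectors: each term is +/-1, so the
# variance of the sum grows linearly with the dimension n.
bin_dots = np.sum(np.sign(a) * np.sign(w), axis=1)
raw_var = bin_dots.var()                       # ~n = 512

# A fixed 1/sqrt(n) scaling factor brings the variance back to ~1.
scaled_var = (bin_dots / np.sqrt(n)).var()     # ~1
```

This is the statistical effect the scaling factor (and, in our method, the layernorm acting as a learned scaling factor) is meant to counteract.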
**Q2: In Table 1, it is unclear which BMT variant has a size of 25MB. Does it imply that all rows correspond to models of approximately 25MB?**
As indicated by the caption, models with 1-bit weights have a size of 25MB. That is, a model whose W_QKV, W_out, and W_FFN are all marked with checkmarks (indicating they are binarized) has a model size of 25MB. In Table 1, all BMT variants except BMT-2 are 25MB. BMT-2 has only the FFN binarized, which is also a potentially useful special variant that we wanted to demonstrate.
---
Rebuttal Comment 1.1:
Comment: Thanks for the kind response. I would like to keep my score. | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive feedback, considering our problem analysis and empirical experiments a good contribution to the community. We also appreciate all comments and suggestions. We will address the questions below in separate threads. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes to binarize matrix multiplication to significantly save memory and thus reduce latency at inference time, which is crucial for serving encoder-decoder models. The basic idea is to employ binary variants of weights and inputs with scaling parameters in the feed-forward and multi-head attention computations. Since the scaling hyperparameters are critical for the binarized model, this work employs layer normalization to alleviate the issue, so that appropriate normalization is performed automatically. Experiments are carried out mainly on WMT en-de, comparing against float variants with different numbers of parameters.
Strengths: * Although the idea of binarizing matrix multiplication is not new, this work makes a couple of contributions toward binarizing an encoder-decoder architecture. For example, the use of layer normalization sounds good to me given the stability analysis for the scaling hyperparameters.
* Binarization is applied not only to the feed-forward layers but also to multi-head attention to reduce computation. For stability, a residual connection is introduced in the model following prior work, and the choice is carefully designed.
* Experiments are systematically carried out by varying the number of parameters, and the proposed method is compared with the float variants.
Weaknesses: * Although this work has comparisons in terms of loss and translation quality measured by, e.g., BLEU, it does not present actual speed measured in seconds. I understand the conditions may vary, but it would be better to run experiments to see whether the proposed method is actually faster than a float baseline.
* The discussion of the residual connection is a bit weak, and only a figure is presented. It would be better to show equations to avoid any confusion. Also, further analysis of why the residual connection is needed in the output projection would be a plus for this submission.
* Given the binarization, the experimental results might be prone to high variance. It would be good to run multiple times and report averages/variances.
* Translation quality is measured only by BLEU. Given the limitations of the metric, it would be better to present alternative metrics as well, e.g., BLEURT or COMET.
* No experiments on larger data. It is minor, but it would be better to run on larger data to see whether the proposed method still works.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * There is almost no clear description of how binarization is applied in Section 3.5. From Figure 1, it appears that the layer norm is shared across query, key, and value, and it is not clear what the motivation for this sharing is.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It would be better to discuss the current limitations of the experiments, e.g., scale and variances.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: I understand the condition might be varied, it is better to measure the speedup of the proposed method.**
Thank you for commenting on the speedup measurement. Such measurement is not currently possible because we are not aware of an ecosystem (accelerator combined with software stack) that supports it for 1-bit models. However, there is convincing evidence that 1-bit matmuls will have high performance. For example, NVIDIA's A100 architecture [1] shows that 1-bit matmul is 8x faster than 8-bit as measured by TFLOPS throughput, though it requires NVIDIA's own assembly. Also, [2] shows that binary matmul is 9x-12x faster than 8-bit as measured by latency on ARM CPUs. In that light, among other goals, our work aims to contribute to the assessment of whether binary ML hardware+software is a viable future direction for the ML accelerator industry.
[1] NVIDIA Ampere Architecture Whitepaper. Table 3.
[2] Tom B., et al., Larq Compute Engine: Design, Benchmark, and Deploy State-of-the-Art Binarized Neural Networks, MLSys’21.
**Q2: Better to show equations on the residual connection and an analysis on why it is needed in the output projection.**
Thanks for the suggestion. We will add an equation on residual connection in Section 3.5: Out = LN(X*W) + X, where X is the output of score-value einsum. As briefly discussed in Section 3.5, we added the residual connection because the binarization of the output projection layer will possibly mislead the optimizer since the gradients through a binarization function are computed by the straight-through estimator. The shortcut link will carry the raw gradient from the previous layer, therefore partially mitigate this issue. We also conducted an ablation study in Section 5(b) to demonstrate the effectiveness of the proposed shortcut.
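As a rough numerical sketch of the shortcut equation Out = LN(X*W) + X (illustrative NumPy only; the function names are ours, and both the binarization of W and the learnable LN parameters are omitted):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Plain layer normalization over the last axis (no learnable gain/bias).
    mu = x.mean(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(x.var(axis=-1, keepdims=True) + eps)

def output_projection_with_shortcut(X, W):
    # Out = LN(X @ W) + X: the shortcut carries raw gradients around the
    # (binarized) output projection; binarization itself is not shown here.
    return layer_norm(X @ W) + X
```

The shortcut term X bypasses the straight-through-estimated gradients of the binarized projection, which is the mitigation discussed above.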
**Q3: Would be good to show averages/variances of results.**
We were able to reproduce the model loss ourselves within several runs, but since we need to train each model size for one million steps on three billion sentence pairs and we have limited hardware resources, the number of runs per result is not sufficient to establish a statistical analysis. However, given the dataset and model sizes, we expect the variance in the experimental results to be small. We will attempt to add such an analysis in the final version.
**Q4: Better to present alternative metrics as well, e.g., BLEURT or COMET.**
We measured and reported BLEURT in Section 4.3, where we compared model generation quality using both BLEURT and BLEU on our in-house translation dataset. We show that BLEURT scores also improve as we scale up the model size. Additionally, we run MBR decoding with the BLEURT metric and show that, given enough MBR samples, binary models match the quality of float models. We will highlight this better in the paper.
**Q5: No experiments for larger data. It is minor, though, better to run larger data to see if the proposed method will also work or not.**
The scaling law study in Section 4.2 and model generation quality study in Section 4.3 are both carried out on our in-house large production-scale translation dataset. This dataset has 3 billion En-De sentence pairs and is one of the largest translation datasets in the ML community.
**Q6: How binarization is applied in Section 3.5? Why is layernorm shared across QKV in Figure 1?**
We will clarify this in the paper. Binarization will be applied via casting the inputs right before a matmul to 1-bit using the function defined in line 106. Naming it as x_b= bin(x) for short, a binarized linear layer A*W will be computed as bin(A) * bin(W).
Each of the QKV projections has its own independent layernorm, i.e., the parameters in layernorm are not shared. Figure 1 draws a big rectangle to represent the layernorms because in practical implementations the QKV projections are usually combined into a large single matmul. Thanks for pointing it out, and we will add corresponding captions to reflect this.
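A minimal sketch of the casting described above (illustrative NumPy; `binarize` stands in for the function defined at line 106, and the straight-through estimator used for gradients in training is not shown):

```python
import numpy as np

def binarize(x):
    # Forward pass of sign-based 1-bit casting to {-1, +1}; training would
    # use a straight-through estimator for the gradient (not shown).
    return np.where(x >= 0, 1.0, -1.0)

def binarized_linear(A, W):
    # Both activations and weights are cast to 1 bit right before the
    # matmul: bin(A) @ bin(W).
    return binarize(A) @ binarize(W)
```

In the full model, a layernorm (scaling factor) follows each such binarized matmul, as described in the answers above.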
**Q7: Better to discuss the current limitation on the experiments.**
We will make the limitations more clear. (1) As stated in line 237, the activation-activation matmul binarization quality is still not ideal. We list 8 BMT variants in Table 1, among which BMT-8 (the one with activation-activation matmul binarization) has the largest loss and BLEU drop. We imagine a lot of the future effort will be dedicated to it. (2) As stated in the last part of the conclusion, there are several unanswered questions in this work, for example, shall we scale up the depth or width for the binarized Transformer and how should we design a mixed-precision scheme using binarization and potentially other formats? We will expand these discussions in the next revision.
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: Thanks for your answers.
- Q5: I think I missed the large-scale experiments.
- Q4: If my understanding is correct, Figure 4(b) reports the BLEURT score for MBR decoding. Basically, the figures compare two different decoding strategies: beam search, which searches for the best translation according to the model, and MBR, which takes a consensus over the sampled translations. It is not clear why BLEU is shown for beam search and BLEURT for MBR, which leads to a non-systematic comparison. I would suggest making the comparison systematic, e.g., showing both BLEU and a model-based score such as BLEURT or COMET for both decoding strategies, to see whether the trends are the same. Also note that MBR-BLEURT will be heavily biased toward BLEURT given that BLEURT is employed for MBR. I would suggest a different metric, COMET, for a fair comparison.
---
Reply to Comment 1.1.1:
Title: Response to follow-up comments
Comment: Thanks for commenting on Q4 and providing very good suggestions.
**“I would suggest showing both BLEU and model-based scores for both decoding strategies to see whether the trends are the same.”**
Presenting a Cartesian product of (MBR, beam search) x (BLEU, BLEURT/COMET) is indeed a good suggestion and ablation, and we will attempt such a measurement. The goal of Figure 4 is to compare whether binary and float models produce translations of similar quality (or how large the gap is). Therefore, we initially chose the common setting of beam search decoding + BLEU score. That said, we were also aware of the discussion about the limitations of BLEU in the machine translation community. We thus provided the quality evaluation in another common setting, MBR + BLEURT, and wanted to show that the lessons concluded in Section 4.3 are independent of the two settings.
**“MBR-BLEURT will be heavily biased toward BLEURT given that BLEURT is employed for MBR. I would suggest a different metric, Comet, for a fair comparison.”**
It is indeed true that MBR decoding with metric-XYZ will be biased towards metric-XYZ, but we also observed that the machine translation community adopts MBR+BLEURT for two reasons:
- MBR+BLEURT correlates the most with Oracle Human MQM evaluation (Table 2, 3, last column [1]).
- BLEURT is one of the most accurate evaluation metrics (Table 2, [2]).
Our motivation was to compare binary vs float models on “one” of the most adopted model-based setups in translation research, so we chose this combination as part of the evaluation.
We also agree that showing one more neural metric (e.g., COMET) would make the evaluation more comprehensive, and we will attempt to do so in our final revision.
[1] Markus Freitag et al., High Quality Rather than High Model Probability: Minimum Bayes Risk Decoding with Neural Metrics. ACL’22.
[2] Tom Kocmi et al., Large Language Models Are State-of-the-Art Evaluators of Translation Quality. EAMT’23. | null | null | null | null | null | null |
Non-Convex Bilevel Optimization with Time-Varying Objective Functions | Accept (poster) | Summary: This paper studies bilevel optimization in an online setting where the objectives at both levels are allowed to vary with time, and the goal is to develop an algorithm with sublinear regret. The paper proposes a practical single-loop algorithm that updates the lower-level variable only once for each upper-level variable update. The lower-level variable is updated via vanilla SGD, while the hypergradient for the upper-level update is computed via a specified number of Conjugate Gradient steps and averaged over a window of specified size. This design significantly improves the computational and memory overhead relative to the existing baseline, which requires access to all the past objectives and gradient oracles in the window; the proposed scheme just needs to maintain the hypergradients in the window. Based on a novel notion of bilevel regret, the paper shows that the proposed algorithm achieves sublinear regret under some standard assumptions for appropriately set algorithmic parameters and window sizes. The empirical evaluation on two online bilevel applications shows that the proposed scheme can match the performance of the existing baseline while being significantly more efficient in both time and memory -- the computational advantage of the proposed scheme is enhanced for larger window sizes.
Strengths: **Practical online solution.**
In the online bilevel setting, it seems more practical to have a single-loop algorithm since the objectives/gradients for each of the level will probably be made available to the learner in a sequential alternate manner. However, single loop bilevel algorithms are usually harder to analyse. So it is a significant contribution to have a single loop algorithm that has sublinear regret.
**Intuitive presentation of theoretical analyses.**
The authors have presented the theoretical analyses in a very intuitive and clear manner. After specifying the necessary assumptions, the authors discuss the steps needed to complete the analyses and gradually build up to each of the results. This presentation is very easy for a reader to follow, and I really appreciate the work done on it. After the main theorem, the authors clearly discuss the conditions under which the desired sublinear regret might be achieved.
**Strong empirical performance against baseline.**
The experimental results clearly show that the proposed SOBOW matches the regret of the OAGD baseline, but is able to do so with significantly lower computational and memory overhead, and without access to the past objectives. The computational gains are very significant, with up to almost $20\times$ speedup. This is an impressive result, making the solution even more practically useful.
Weaknesses: **Hyperparameters in the definition of regret.**
One of the weaknesses of this paper is that the proposed novel notion of bilevel regret itself (equation (2)) depends on the window size $K$ and the decay rate $\eta$ (and it is not quite clear what the subscript $w$ in $BLR_w(T)$ denotes). Given that the subsequent analysis shows that these quantities need to be set appropriately for the desired convergence rate, it is odd that the notion of regret itself depends on them. The algorithm can use such hyperparameters, but the term quantifying the regret should not depend on them. Is this standard in dynamic local regret analysis? One would expect the bilevel local regret to be defined as a quantity such as $\sum_{t=1}^T || \nabla_x f_t(x_t, y_t^*(x_t)) ||^2$, where we are computing the per-time-step (local) suboptimality, and we would want this quantity to grow sublinearly with $T$ (with appropriate assumptions on the relationship between $f_t, g_t$ and $f_{t+1}, g_{t+1}$). It seems as if the definition of bilevel regret in equation (2) was chosen because it matches the form of the upper-level update used in the proposed algorithm.
**No dependence on lower-level suboptimality in the regret.**
Another issue with the considered notion of bilevel regret is that it is not clear why it is meaningful for the regret to depend on $(x_t, y_t^*(x_t))$ instead of just $(x_t, y_t)$ or $(x_t, y_{t+1})$. Alternately, it is not clear why the sub-optimality in the lower-level decision variable (that is, having $y_{t+1}$ instead of $y_t^*(x_t)$) does not contribute to the regret in any way. Analyses of static bilevel optimization usually establish convergence of $ || \nabla_x f(x_t, y_t^*(x_t)) ||^2$ **as well as** that of $|| y_{t+1} - y_t^*(x_t) ||^2$.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Given existing single-loop static algorithms such as TTSA [A] and STABLE [B], how is the proposed algorithm positioned against these? One of the challenges with single-loop schemes is that the upper-level updates need to be very slow (that is, have a small upper-level learning rate relative to the lower-level learning rate) if we are just using a single lower-level SGD step. This is because, otherwise, it is hard to guarantee that $y_{t+1}$ converges to $y^*(x_t)$, since the $x_{t-1} \to x_t$ update can significantly move the lower-level target from $y^*(x_{t-1})$ to $y^*(x_t)$, which a single SGD step cannot catch up with. That is why the more expensive but sophisticated lower-level update is proposed in STABLE, to allow for faster upper-level updates. Does this issue manifest in the proposed SOBOW, resulting in the need for a smaller upper-level learning rate (and thus slower convergence), or is there something in the nature of the online bilevel setup that mitigates this issue?
- Given that $\{Q_t, t \in [T]\}$ would be an increasing sequence, what is the motivation to not just solve the least-squares problem to sufficient optimality at each step and remove the error term in Lemma 5.5, and simplify the analysis?
>[A] Hong, Mingyi, et al. "A two-timescale stochastic algorithm framework for bilevel optimization: Complexity analysis and application to actor-critic." SIAM Journal on Optimization 33.1 (2023): 147-180.
>[B] Chen, Tianyi, et al. "A single-timescale method for stochastic bilevel optimization." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do explicitly discuss some limitations, and I do not anticipate any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough reviews and constructive comments. We provide our response to your comments below. If our response resolves your concern, we would greatly appreciate it if you could consider increasing your score.
Q1: Hyperparameters in the definition of regret. Is it standard in dynamic regret analysis?
A1: Many thanks for the insightful comments. The window-smoothed regret definition is widely adopted in online nonconvex optimization for dynamic local regret analysis (e.g., E. Hazan et al., 2017; S. Aydore et al., 2019; N. Hallak et al., 2021; D. Tarzanagh and L. Balzano, 2022; Y. Huang et al., 2023; Z. Guan et al., 2023), and our definition follows exactly those in S. Aydore et al., 2019 and D. Tarzanagh and L. Balzano, 2022, which also include both the window size and $\eta$ in their definitions. The underlying ideas behind designing such a local regret are as follows:
(1) It has been shown in the literature on online learning (e.g., E. Hazan et al., 2017) that this time-smoothing is indeed necessary for the definition of the local regret in the non-convex setup. Specifically, for any online algorithm, there exists an adversarial sequence of loss functions which can force the local regret to be $\Omega(\frac{T}{K^2})$. Therefore, a sublinear regret cannot be achieved for $\sum_{t=1}^T ||\nabla_x f_t(x_t, y_t^*(x_t))||^2$ as suggested by the reviewer (which corresponds to the case $K=1$). In practice, the average performance of a system is also a typical and intuitive notion that is commonly used to evaluate real-world applications. For example, under changing environments, such an average performance metric over a period is naturally adopted in time series forecasting problems (S. Aydore et al., 2019). In terms of the decay rate $\eta$, it is reasonable to assign larger weights to the most recent functions, in the same way as, e.g., S. Aydore et al., 2019 and D. Tarzanagh and L. Balzano, 2022. Here the subscript $w$ in $BLR_w(T)$ just refers to the window-averaged local regret.
(2) A small time-smoothed gradient in expectation implies that the outer-level decision is becoming better and closer to the local optima for the outer-level optimization problem at each round.
- E. Hazan et al.. Efficient regret minimization in non-convex games. ICML, 2017.
- S. Aydore et al.. Dynamic local regret for non-convex online forecasting. NeurIPS, 2019.
- N. Hallak et al.. Regret minimization in stochastic non-convex learning via a proximal-gradient approach. ICML, 2021.
- D. Tarzanagh and L. Balzano. Online bilevel optimization: regret analysis of online alternating gradient methods. 2022.
- Y. Huang et al.. Online min-max problems with non-convexity and non-stationary. TMLR, 2023.
- Z. Guan et al.. Online nonconvex optimization with limited instantaneous oracle feedback. COLT, 2023.
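For reference, a window-smoothed local regret of the kind cited here takes the following form (following S. Aydore et al., 2019; the normalization in the paper's equation (2) may differ slightly):

```latex
BLR_w(T) = \sum_{t=1}^{T}\bigg\| \frac{1}{W}\sum_{k=0}^{K-1} \eta^{k}\,
  \nabla_x f_{t-k}\big(x_{t-k},\, y_{t-k}^{*}(x_{t-k})\big) \bigg\|^{2},
\qquad W = \sum_{k=0}^{K-1} \eta^{k}
```

Setting $K=1$ recovers the unsmoothed sum $\sum_{t=1}^T \|\nabla_x f_t(x_t, y_t^*(x_t))\|^2$, which the $\Omega(\frac{T}{K^2})$ lower bound above rules out as sublinear.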
Q2: No dependence on lower-level suboptimality in the regret.
A2: Thank you for your constructive comments. We have the following response to this question:
(1) Indeed, we also considered the reviewer's suggestion for the regret definition at the beginning of this study. However, it turns out that the physical meaning of $f_t(x_t, y_t)$ or $f_t(x_t, y_{t+1})$ is not clear in bilevel optimization, whereas $f_t(x_t, y_t^*(x_t))$ is the true objective function at the outer level. Moreover, a small value of $f_t(x_t, y_t)$ or $f_t(x_t, y_{t+1})$ does not imply a small value of $f_t(x_t, y_t^*(x_t))$; namely, the bilevel problem requires making $f_t(x_t, y)$ as small as possible specifically at the $y_t^*(x_t)$ that minimizes the inner problem, not at other values $y_t$ or $y_{t+1}$. Thus, minimizing the regret in terms of $f_t(x_t, y_t)$ or $f_t(x_t, y_{t+1})$ does not necessarily imply that $y_t$ or $y_{t+1}$ are near-optimal with respect to the inner problem, and they may therefore not be desirable decision variables for the bilevel optimization problem at each step.
(2) In fact, the sub-optimality in the lower-level decision variable does contribute to the regret, through the update of $x_t$ in Equation (6), which depends heavily on the quality of $y_{t+1}$. Intuitively, when $y_{t+1}$ is a more accurate estimate of $y_t^*(x_t)$, the hypergradient estimate is more accurate and the outer-level $x_{t+1}$ will be a better decision, leading to a smaller regret.
Q3: Does SOBOW need a smaller upper-level learning rate?
A3: We do need a smaller learning rate for the upper-level problem. As shown in Theorem 5.7, the upper-level learning rate $\beta$ is in the order of $o(\alpha^2)$, where $\alpha$ is the lower-level learning rate.
Q4: What is the motivation to not just solve the least-squares problem to sufficient optimality at each step and remove the error term in Lemma 5.5, and simplify the analysis?
A4: In practice, solving the problem for $v_t^*$ to sufficient optimality can introduce high computational cost at each step, which can significantly slow down the learning process. In contrast, the computational cost can be substantially reduced without requiring high estimation accuracy of $v_t^*$ at each step in our work. Further, note that in (K. Ji et al., 2022) it is shown that the value of $Q_t$ for solving the problem for $v_t^*$ indeed has a relatively weak impact on the overall performance. For example, in (K. Ji et al., 2022), the cases between $Q=1$ and $Q=20$ have very similar performance and also similar running time. This implies that the computational cost introduced by the increasing $Q_t$ is actually negligible. Therefore, in this work we consider the sub-optimality of $v_t$ and take the estimation error for $v_t^*$ into consideration.
- K. Ji, et al.. Will bilevel optimizers benefit from loops? NeurIPS, 2022.
We thank the reviewer again for your insightful comments. Again, if our response resolves your concern, we will appreciate it very much if you could consider increasing the score. We will also be very happy to answer any further questions you may have.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer ouS8,
The author-reviewer discussion period has been open for a week and will end very soon. Could you please check our response at your earliest convenience? This way, if you have further questions, we will still have time to respond before the discussion period ends. We thank the reviewer very much in advance for your time and effort.
---
Rebuttal 2:
Comment: Dear Reviewer ouS8: can you read the authors' response, and see if your comments are addressed?
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer ouS8,
We would like to bring to your attention that your individual ratings on Soundness (3 good), Presentation (4 excellent) and Contribution (3 good) are not consistent with your final rating of 3 for our paper. Your review also seems to suggest that you highly favor this paper, as reflected by comments such as “So it is a significant contribution to have a single loop algorithm that has sublinear regret”; “The computational gains are very significant”; “This is an impressive result, making the solution even more practically useful”, etc. The weakness part of your review seems to contain only clarification questions, to which we believe we have provided convincing answers in our response.
Could you please reconsider your final rating of the paper so that it aligns with your review and our responses to your concerns? Of course, if you have any further questions, we would be very happy to address them.
Thank you very much for your time and efforts! | Summary: The authors consider bilevel optimization in the online setting. In this setting, we have access at iteration $t$ to the outer function $f_t$, which is assumed to be differentiable and possibly nonconvex. We also have access to the inner function $g_t$, which is assumed to be twice differentiable and strongly convex with respect to the inner variable $y$. They propose SOBOW, an algorithm that implements approximate implicit differentiation in a single-loop fashion. In SOBOW, at iteration $t$, the inner variable is updated by a gradient step, and the solution of the linear system involved in the expression of the hypergradient is approximated by Conjugate Gradient steps. Then, the obtained approximate hypergradient is stored, and the outer variable $x$ is updated in the direction opposite to an average of the last $K-1$ approximate hypergradients computed. The outer variable is then projected onto the constraint set $\mathcal{X}$.
The authors show that SOBOW achieves a sublinear local regret.
SOBOW is numerically compared with OGD and OAGD on an online hyper-representation learning task using a simulated dataset, and on an online hyperparameter optimization problem using the 20newsgroups dataset.
Strengths: * The paper is clearly written
* The authors study online bilevel optimization which has been very little studied in the literature.
* The proposed method is theoretically grounded
* The method improves upon previous work by avoiding the evaluation of the previous functions at the current iterates.
* Numerical validation is provided on several tasks.
Weaknesses: * The idea of single-loop updates was already exploited in offline context [1, 2, 3]. The authors should mention it.
* Since the authors consider a projection onto $\mathcal{X}$, this set has to be assumed closed.
* If $\mathcal{X}$ is assumed to be closed and bounded, the boundedness of $\nabla f$ is automatic, making Assumption 5.4 unnecessary.
* In terms of notation, $\nabla f_t(x, y^*(x))$ is confusing because it can be read either as the gradient of the function $f_t$ evaluated at the point $(x, y^*(x))$ or as the gradient of the function $x\mapsto f_t(x,y^*(x))$. It would be clearer to give a name to the function $x\mapsto f_t(x,y^*(x))$.
[1] M. Hong, H.-T. Wai, Z. Wang, and Z. Yang. A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic. arXiv:2007.05170, 2021
[2] M. Dagréou, P. Ablin, S. Vaiter, and T. Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. NeurIPS, 2022.
[3] J. Li, B. Gu, and H. Huang. A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse. AAAI, 2022
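For context on the notation point raised above, the quantity in question is the total derivative supplied by the implicit function theorem, standard under strong convexity of $g_t$ in $y$ (stated here for reference, not quoted from the paper):

```latex
\Phi_t(x) := f_t\big(x, y_t^*(x)\big), \qquad
\nabla \Phi_t(x) = \nabla_x f_t\big(x, y_t^*(x)\big)
  - \nabla_{xy}^2 g_t\big(x, y_t^*(x)\big)
    \big[\nabla_{yy}^2 g_t\big(x, y_t^*(x)\big)\big]^{-1}
    \nabla_y f_t\big(x, y_t^*(x)\big)
```

Naming $\Phi_t$ removes the ambiguity: $\nabla\Phi_t(x)$ is the full hypergradient, while $\nabla_x f_t(x, y_t^*(x))$ is only its first, partial term.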
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough reviews and constructive comments. We provide our response to your comments below.
Q1: The idea of single-loop updates was already exploited in offline context [1, 2, 3]. The authors should mention it.
A1: Thank you for bringing up these studies. We will add them in the related work in the revision.
Q2: Since the authors consider a projection onto $\mathcal{X}$, this set has to be assumed closed.
A2: Thank you for pointing out this missing statement. We will change the statement to “the closed convex set $\mathcal{X}$” in Assumption 5.3.
Q3: Under a closedness assumption of $\mathcal{X}$ and $\mathcal{X}$ being assumed to be bounded, the boundedness of $\nabla f$ is automatic making assumption 5.4 unnecessary.
A3: Many thanks for the good suggestion. We will change this assumption to a statement.
Q4: In terms of notation, $\nabla f_t(x, y^*(x))$ is confusing because it can be read either as the gradient of the function $f_t$ evaluated at the point $(x, y^*(x))$ or as the gradient of the function $x \mapsto f_t(x, y^*(x))$. It would be clearer to give a name to the function $x\mapsto f_t(x, y^*(x))$.
A4: Thank you for the constructive suggestion. We will define $\Phi_t: x\mapsto f_t(x, y_t^*(x))$ in the revision.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your answer and corrections. | Summary: This paper proposed a new method for solving online bilevel problem that only required one-step $y$ update and leveraged the historical information to smooth the update. Extensive experiments are provided to validate their theories.
Strengths: 1. This work is the second one considering the online bilevel optimization problem and this problem can motivate many applications.
2. The algorithm does not require multiple inner updates and the evaluation of the current models on previous functions, so that it is more applicable to online setting.
Weaknesses: 1. One of the challenges unique to online bilevel optimization, compared to its offline counterpart, lies in controlling the hypergradient estimation error, which depends on $ ||y_t^*(x_t)-y_{t+1}^*(x_{t+1})||^2$. Unlike in the offline case, this term cannot simply be bounded by $|| x_{t+1} -x_t ||$, due to the time-varying nature of $g_t$. It is impossible to control this term without a variation assumption on the lower-level objective, so it is intriguing to see how to regulate it under a bounded-variation assumption on $g_t$. However, Theorem 5.7 seems to circumvent this challenge by directly converting the hypergradient estimation error into the terms $V_T$ and $H_T$, over which we have no direct control. Is it possible to characterize these two terms explicitly by the variation of $g_t$? This would offer more insight into its effect on the overall online bilevel optimization.
2. The state-of-the-art work on offline bilevel optimization also adopts a three-level formulation, treating $v$ as the solution of a quadratic problem and thus eliminating the conjugate-gradient loop. Since one of the contributions of this work is to reduce the multi-step lower-level updates to a single step, it would also be intriguing to see whether the conjugate-gradient loop can be reduced, since it is also time-consuming.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Could the bounded function value in Assumption 5.3 be relaxed to merely on the feasible set? In this way, it can be derived from the bounded feasible set and Lipschitz continuity assumptions. Otherwise, bounded function value on the whole space is relatively restricted. Also, does the objective in the experiment part satisfy this assumption?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough reviews and constructive comments. We provide our response to your comments below. If our response resolves your concern, we would greatly appreciate it if you could consider increasing your score.
Q1: Is it possible to characterize these two terms explicitly by the variation of $g_t$? This would offer more insights into the effect on the overall online bilevel optimization.
A1: Thank you for this insightful comment. We have the following response to this question:
(1) Indeed as suggested by the reviewer, it is possible to explicitly analyze the regret in terms of the variation of $g_t$. For example, in terms of $H_T$, based on the strong convexity of the function $g_t$, we can further bound $||y_t^*(x_t)-y_{t+1}^*(x_t)||^2$ from above based on the function variation $\sup_y |g_t(x_t, y)-g_{t+1}(x_t, y)|$; in terms of $V_T$, based on the Lipschitz continuity of function $f$, we can upper bound $f_{t+1}(x, y_{t+1}^*) - f_t(x, y_t^*(x))$ based on $||y_t^*(x)-y_{t+1}^*(x)||^2$ and the function variation of $f$, i.e., $\sup_y [f_{t+1}(x, y)-f_t(x,y)]$, where the first term can be further bounded above by the function variation of $g$. We will discuss this and have a more detailed investigation in the revision.
(2) In online bilevel optimization, the variation of $y_t^*(x)$ is more important since it directly affects the outer-level objective functions. Further, for strongly convex inner-level functions $g_t$, when the variation of $g_t$ is small, i.e., $|g_t(x, y)-g_{t+1}(x, y)|$ is small, the gap between $y_t^*(x)$ and $y_{t+1}^*(x)$ will not be large; when the function value $g_t$ changes significantly, as long as the variation of $y_t^*(x)$ is small, our algorithm can still guarantee a small regret. In this sense, the condition on the variation of $y_t^*(x)$ is weaker compared to the condition on the variation of $g_t$ in order to achieve a small regret.
(3) Using path-length regularization to capture the variation of optimal decision variables is very common in the literature of dynamic online learning, e.g.,
- M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. ICML, 2003.
- A. Jadbabaie et al.. Online optimization: competing with dynamic comparators. AISTATS, 2015.
- A. Mokhtari et al.. Online optimization in dynamic environments: improved regret rates for strongly convex problems. CDC, 2016
- T. Yang et al.. Tracking slowly moving clairvoyant: optimal dynamic regret of online learning with true and noisy gradient. ICML, 2016.
- L. Zhang et al.. Improved dynamic regret for non-degenerate functions. NeurIPS, 2017.
- P. Zhao et al.. Dynamic regret of convex and smooth function. NeurIPS, 2020.
Note that because this variation of optimal decision variables is not controllable, we do not use this term in the design of the algorithm. Rather, the variation term is only used in the theoretical analysis to understand which factors in the system lead to a tighter bound on the regret.
Q2: It is also intriguing to see whether the conjugate gradient loop can be reduced since it is also time-consuming.
A2: We are not entirely sure which variant of this question the reviewer has in mind, so we address both possibilities.
(1) If the reviewer refers to directly using the closed-form solution for $v^*$, this solution involves computing the Hessian inverse, which is computationally expensive.
(2) If the reviewer refers to using only one step of conjugate gradient to estimate $v^*$, this is doable for offline bilevel optimization with time-invariant objective functions, where slow changes of the variables can still make progress toward solving the optimization problem. However, in online bilevel optimization, since the estimation error of $v_t^*$, i.e., $||v_t^Q-v_t^*||$, depends on the estimation error of $v^*_{t-1}$ in the last round and the variation of $v_t^*$, i.e., $||v_{t-1}^*-v_t^*||$, we need to make sure that $||v_{t-1}^*-v_t^*||$ decays with $t$ in order to achieve a sublinear regret with one-step conjugate gradient. In the offline case, $||v_{t-1}^*-v_t^*||$ only depends on $||x_{t-1}-x_t||$, which can decay gradually; however, in the online case, $||v_{t-1}^*-v_t^*||$ also depends on the function variations, so additional conditions may be needed to achieve a sublinear regret. To summarize, reducing the number of conjugate-gradient steps is an interesting open problem that is beyond the scope of this paper, but one we plan to investigate in future work.
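To make the contrast between the closed-form solve in (1) and the conjugate-gradient estimate concrete, here is a minimal numpy sketch (a toy SPD matrix standing in for the Hessian; hypothetical, not the paper's code) showing that a modest number of Hessian-vector products already recovers $v^*$ without ever inverting the Hessian:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 200
M = rng.standard_normal((d, d))
H = M.T @ M / d + np.eye(d)        # SPD stand-in for the Hessian of g_t in y
b = rng.standard_normal(d)         # stand-in for grad_y f_t

# Closed form: v* = H^{-1} b. Requires materializing and factorizing the
# full Hessian -- O(d^3), impractical for large models.
v_star = np.linalg.solve(H, b)

def cg(hvp, b, steps):
    """Run `steps` conjugate-gradient iterations on H v = b, touching H
    only through Hessian-vector products (never forming H itself)."""
    v = np.zeros_like(b)
    r = b - hvp(v)
    p = r.copy()
    for _ in range(steps):
        rr = r @ r
        if rr < 1e-14:              # already converged
            break
        Hp = hvp(p)
        a = rr / (p @ Hp)
        v = v + a * p
        r = r - a * Hp
        p = r + ((r @ r) / rr) * p
    return v

# A modest number of matrix-vector products gives an accurate estimate.
v_q = cg(lambda u: H @ u, b, steps=30)
```

For well-conditioned Hessians (as guaranteed here by strong convexity), the conjugate-gradient error contracts geometrically in the number of steps, which is why a fixed $Q$ per round suffices in the analysis.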
Q3: Could the bounded function value in Assumption 5.3 be relaxed to merely on the feasible set? Also, does the objective in the experiment part satisfy this assumption?
A3: (1) Yes, Assumption 5.3 can be relaxed so that only the function value at $(x, y^*(x))$, i.e., $f_t(x, y^*(x))$, is required to be bounded, which holds given the bounded feasible set, Lipschitz continuity, and the boundedness of $y_t^*(x)$. Here $y^*(x)$ is generally assumed to be bounded in bilevel optimization so that the lower-level problem can be solved to a given accuracy. (2) Yes, the objective in the experiments satisfies this assumption. For example, in the online hyper-representation learning problem, since both the values of the data samples and the feasible set of the decision variables are bounded, the function values are bounded.
We thank the reviewer again for your insightful comments. Again, if our response resolves your concern, we will appreciate it very much if you could consider increasing the score. We will also be very happy to answer any further questions you may have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification; it has addressed some of my questions. I think that controlling the term $\|y_t^*(x_t)-y_{t+1}^*(x_t)\|^2$ via a bounded-variation assumption on $g_t(x,y)$ remains crucial to online bilevel optimization. As you also concur with its potential solution, it would be better to incorporate a more detailed discussion and a rigorous theory on this topic in the current version.
Regarding Q2, I'm referencing the fully single-loop techniques [1]-[3] in offline bilevel optimization, where $v$ is treated as another optimization variable, akin to $y$. Given your strategy to reduce the number of loops for optimizing $y$, could a similar approach be applied to $v$?
[1] A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse. AAAI 2022.
[2] A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. NeurIPS 2022.
[3] Amortized Implicit Differentiation for Stochastic Bilevel Optimization. ICLR 2022.
---
Reply to Comment 1.1.1:
Title: Response to the further comment
Comment: Q1: It might be better to incorporate a more detailed discussion and a rigorous theory on this topic in the current version.
A1: Thank you for the advice. We have developed the following theorem by using the function variations:
**Theorem. Suppose that Assumptions 5.1-5.4 hold. Let $V_g=\sum_{t=1}^T \sup |g_{t+1}(x, y)-g_{t}(x, y)|$ and $V_f=\sum_{t=1}^T \sup [f_{t+1}(x, y)-f_t(x, y)]$. Under the same conditions on $\lambda$, $\alpha$, $Q_t$, $\eta$ and $\beta$ as in Theorem 5.7, we have $BLR_w(T)\leq O\left(\frac{T}{\beta W} + \frac{V_f}{\beta} + V_g + \frac{\sqrt{T V_g}}{\beta} \right)$.**
A more detailed analysis is as follows:
(1) For $H_{2,T}$, based on the strong convexity of $g_t$, we can show that $||y_{t+1}^*(x)-y_{t}^*(x)||^2 \leq \frac{2}{\mu_g} \sup |g_{t+1}(x, y)-g_{t}(x, y)|$;
(2) For $V_{1,T}$, we can show that $f_{t+1}(x, y^*_{t+1}(x))-f_t(x,y_t^*(x))\leq L_0 ||y^*_{t+1}(x)-y_t^*(x)||+\sup [f_{t+1}(x, y)-f_t(x, y)]$, such that Line 667 (Lemma G.3) in Appendix can be upper bounded by $\frac{2MT}{W}+L_0\sqrt{\frac{2T}{\mu_g}}\sqrt{\sum_{t=1}^T \sup |g_{t+1}(x, y)-g_{t}(x, y)|}+ \sum_{t=1}^T \sup [f_{t+1}(x, y)-f_t(x, y)]$;
(3) Based on these, if we denote $V_g=\sum_{t=1}^T \sup |g_{t+1}(x, y)-g_{t}(x, y)|$ and $V_f=\sum_{t=1}^T \sup [f_{t+1}(x, y)-f_t(x, y)]$ to capture the function variations, we can have the overall regret as $O\left(\frac{T}{\beta W} + \frac{V_f}{\beta} + V_g + \frac{\sqrt{T V_g}}{\beta} \right)$. In this case, a sublinear regret will be achieved if both $V_g$ and $V_f$ are $o(T)$ for suitably selected $W$. As mentioned in our previous response, the condition on the variation of $y_t^*(x)$ is weaker compared to the condition on the variation of $g_t$ in order to achieve a small regret. For example, suppose $W= \omega(T)$ and the function variation of $f_t$ is very small, to achieve a regret of $O(T^{3/4})$, $H_{2,T}=O(T^{3/4})$ is sufficient, while we need a stricter condition on the variation of $g_t$, i.e., $V_g=O(T^{1/2})$.
We will add the theorem and more detailed discussions on this in the final version per the reviewer’s suggestion. Once again, we do not use the terms $H_{2,T}$ and $V_{1,T}$ in the algorithm. Rather, the variation term is only used in the theoretical analysis to understand which factors in the system lead to a tighter bound on the regret.
Q2: Given your strategy to reduce the number of loops for optimizing $y$, could a similar approach be applied to $v$?
A2: This appears difficult, because the objective functions in the offline setting such as in [1]-[3] are **time-invariant**, while the objective functions in the online setting (such as ours) are **time-varying**. Hence, in the offline setting, it is easier to control the error even with only a one-step conjugate-gradient estimate of $v^*$, because of the time-invariant setup. In contrast, because of the time-varying nature of the objective functions in the online setting, controlling the error with only one conjugate-gradient step becomes extremely difficult. More details are provided below to explain this difficulty.
(1) In the current work, we seek to reduce the number of steps for updating $y_t$ so that our algorithm can also work under limited knowledge of the function $g_t$. However, this can result in a large estimation error for the hypergradient at each step. To control this error, we carefully control the estimation errors of $y_t^*$ and $v_t^*$ together, so that the summation of $||y_t^*(x_t)-y_{t+1}||^2$ and $||v_t^*-v_t^Q||^2$ decays, in order to achieve a sublinear regret under function variations. We achieve this by increasing the estimation accuracy for $v_t$ (note that this does not require more information about the function $g_t$), which compensates for the large estimation error of $y_t^*$ due to the single-step update.
(2) When further reducing the number of update steps for $v_t$, we still need to jointly control the estimation errors of $y_t^*$ and $v_t^*$, but the strategy above no longer works since the estimation error of $v_t^*$ is also large. Moreover, the warm-start strategy used in offline bilevel optimization does not work here due to the time-varying functions in online bilevel optimization. In particular, the estimation error of $v_t^*$ depends on the updates of $x_t$ and $y_t^*$ as well as the variations of both the outer-level function $f_t$ and the inner-level function $g_t$. This greatly complicates the analysis and makes achieving a sublinear regret highly nontrivial. Investigating this problem is very interesting but is best considered as independent future work.
Finally, if our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider raising the score of your evaluation. Certainly, we are more than happy to address any further questions that you may have during the discussion period. We thank the reviewer again for the helpful comments and suggestions for our work. | Summary: This work studies the online bilevel optimization problem with nonstationary, time-varying objective functions. This line of research covers applications of an online nature such as online meta-learning, online hyperparameter tuning, and wireless networks. Compared to the widely studied offline bilevel problem, the online setting poses challenges such as limited information, hypergradient computation, and changing objectives. The authors propose a single-loop online bilevel optimizer called SOBOW based on online nonconvex optimization and window averaging, and further show that it attains sublinear regret. Experiments are provided to justify the effectiveness of the proposed method.
Strengths: 1. Bilevel optimization has been studied intensively mainly in the offline setting where all objective functions are fixed and known. It has been much less explored in the online and nonstationary setting. This work seems to provide a simple and good solution.
2. The authors have done a good job of discussing the underlying challenges and the drawbacks of the existing method in [54]. How to design a good online bilevel optimizer with limited queries, efficient hypergradient computation, and guaranteed regret turns out to be nontrivial.
3. Technically, this work needs to cope with 1) the interaction among the three variables $x,y,v$ in an online manner, 2) time-smoothed gradient updates, and 3) biased gradient estimation, none of which is straightforward.
Weaknesses: 1. The hypergradient estimation contains second-order derivatives. Will they be costly in practical online settings? Is it possible to design a fully first-order method without matrix-vector computations? There has been some recent progress towards Hessian-free bilevel optimization.
2. The authors do not compare the rate of their method with OAGD.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough reviews and constructive comments. We provide our responses to your comments below.
Q1: Is it possible to design fully first-order methods without matrix-vector computations?
A1: Many thanks for the insightful comments and suggestions. Our current algorithm involves matrix-vector computations to deal with second-order derivatives, which is fairly efficient in our experiments. As the reviewer suggested, the computational cost of the algorithm can be further reduced via fully first-order methods or Hessian-free design. We describe these ideas as follows.
To the best of our knowledge, the recent first-order approaches mainly follow two strategies: (1) replace the second-order term in the hypergradient estimation with zeroth-order estimations; (2) reformulate the bilevel problem to a single-level constrained optimization problem and use first-order methods to solve the reformulated problem.
For the first strategy, applying it to our approach is straightforward: we replace the second-order term in the online hypergradient with its zeroth-order estimation. For the second strategy, we need to reformulate the bilevel problem at each round as a constrained optimization problem, and then develop an online algorithm for solving the reformulated nonconvex constrained problem. Primal-dual methods can be leveraged to solve such a constrained problem. In particular, a recently developed single-loop algorithm for nonconvex constrained problems (S. Lu et al., 2022) can be leveraged to develop a single-loop online algorithm and analyze its regret performance.
- S. Lu et al. A single-loop gradient descent and perturbed ascent algorithm for nonconvex functional constrained optimization. ICML, 2022.
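As a concrete illustration of how explicit second-order computation can be avoided, one standard finite-difference trick (a generic recipe, related to but not identical with the two strategies discussed above) approximates the matrix-vector product in the hypergradient by differences of gradients:

$$\nabla_{xy}^2 g_t(x, y)\, v \;\approx\; \frac{\nabla_x g_t(x, y + \delta v) - \nabla_x g_t(x, y - \delta v)}{2\delta}$$

for a small $\delta > 0$, so that only gradient (first-order) oracles of $g_t$ are required.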
Q2: The authors do not compare the rate of their method with OAGD.
A2: Thanks for the question. We have the following response to this question:
(1) Since our work uses a different regret definition from that in OAGD, a direct comparison between the regret bounds would not be fair.
(2) Instead, as shown in Figure 1, we compare the performance of our algorithm and OAGD using our definition of regret. We have also conducted experiments comparing the two algorithms using the regret defined in OAGD (Figure 2 in Appendix). In both cases, our algorithm achieves regret performance comparable to OAGD, but with a much shorter runtime.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. It clarifies my questions. I will keep my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Self-Weighted Contrastive Learning among Multiple Views for Mitigating Representation Degeneration | Accept (poster) | Summary: This paper discusses the limitations of contrastive learning (CL) in multi-view scenarios and proposes a novel framework called SElf-weighted Multi-view contrastive learning (SEM) to address these limitations. The contributions of SEM framework are as follows:
- Alleviating representation degeneration: In multi-view scenarios, CL can lead to representation degeneration when the collected views have inconsistent semantic information or lack sufficient discriminative information. SEM aims to mitigate this issue by adaptively strengthening useful pairwise views and weakening unreliable pairwise views through a self-weighted contrastive loss.
- Regularizing hidden features: SEM introduces a self-supervised reconstruction term to regularize the hidden features of encoders. This regularization assists CL in accessing sufficient discriminative information from the data.
- Extensive experimental validation: Experiments on public multi-view datasets demonstrate that SEM effectively mitigates representation degeneration in existing CL methods and leads to significant performance improvements. Ablation studies further verify the effectiveness of SEM with different options of weighting strategies and reconstruction terms.
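To make the self-weighted idea concrete, here is a minimal plain-Python sketch of a pairwise-weighted InfoNCE objective (the function names and the way the weights enter are simplifying assumptions for illustration, not SEM's exact loss):

```python
import math

def cosine(u, v):
    """Cosine similarity between two (nonzero) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE between two aligned views (lists of vectors):
    sample i in view 1 is the positive of sample i in view 2."""
    loss, n = 0.0, len(z1)
    for i in range(n):
        logits = [cosine(z1[i], z2[j]) / tau for j in range(n)]
        m = max(logits)  # log-sum-exp stabilization
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / n

def self_weighted_loss(views, weights, tau=0.5):
    """Sum of pairwise InfoNCE terms, each scaled by a per-pair weight."""
    total, V = 0.0, len(views)
    for v in range(V):
        for u in range(V):
            if u != v:
                total += weights[v][u] * info_nce(views[v], views[u], tau)
    return total
```

Down-weighting an unreliable pair (a small `weights[v][u]`) shrinks its contribution to the total loss, which is the degeneration-mitigation mechanism described above.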
Strengths: - The proposal of SEM is well-motivated: SEM tackles the issue of representation degeneration in multi-view scenarios where inconsistent semantic information and insufficient discriminative information exist.
- Flexibility in weighting strategies: SEM provides three options for implementing the weighting strategy, including class mutual information, JS divergence, and maximum mean discrepancy.
- Significant performance improvements and thorough ablation studies: Experimental results on 5 public multi-view datasets, component ablation studies and hyper-parameter analysis demonstrate that SEM effectively mitigates representation degeneration in existing contrastive learning (CL) methods.
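Of the weighting options listed above, the Jensen-Shannon divergence is the simplest to state; a minimal sketch for discrete, normalized distributions follows (an illustrative helper, not SEM's exact discrepancy computation):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric discrepancy between p and q, bounded by log 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For example, a small divergence between two views' predicted class distributions could be mapped to a larger pairwise weight, strengthening contrastive learning on semantically consistent pairs.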
Weaknesses: - I have some concerns regarding the rationale behind introducing the reconstruction module. If the aim of incorporating reconstruction is to enhance the discriminative information of the representation, why does the process involve an additional encoder after obtaining the representation H, instead of directly reconstructing the representation z? It would be beneficial to provide more details in the experimental results regarding the position of the reconstruction within the overall encoder and further analyze the relationship between discriminative information and the resulting performance improvement in both the method and experimental sections, thus solidifying the framework.
- (minor) Based on the experimental findings, it appears that the enhancement from the reconstruction regularization is more pronounced compared to the improvement stemming from the self-weighted module. Why the reconstruction leads to a greater improvement needs a more detailed explanation in the experimental section. Besides, in terms of the method's naming, it would be beneficial to highlight the concept of the reconstruction component.
- (minor) The presentation and captions of Figures 1 and 5 can be further improved, and it would be ideal to summarize the conclusions and connections among the data displayed in the three figures.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This method utilizes a number of models equal to the scale of the views, and it also requires calculating pairwise weights, which significantly increases the memory usage and training time as the number of views increases. The authors seem to lack detailed comparative figures with previous methods in this regard.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This paper does not include sufficient discussions about the computational cost and memory overhead, which are suggested to be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response To Reviewer tV6r:
>Q1: I have some concerns regarding the rationale behind introducing the reconstruction module. If the aim of incorporating reconstruction is to enhance the discriminative information of the representation, why does the process involve an additional encoder after obtaining the representation $\mathbf{H}^v$, instead of directly reconstructing the representation $\mathbf{Z}^v$?
Thanks for raising this concern. Data from different views usually contain useful discriminative information about the common semantics as well as semantic-irrelevant information. We introduce the reconstruction module on $\mathbf{H}^v$ to prevent $\mathbf{H}^v$ from losing the useful discriminative information of the data (note that $\mathbf{H}^v$ also retains some semantic-irrelevant information because of the reconstruction). Contrastive learning on $\mathbf{Z}^v$ can then let $\mathbf{Z}^v$ access sufficient discriminative information from $\mathbf{H}^v$ to explore the common semantics of multiple views. In contrast, if the reconstruction loss were imposed on $\mathbf{Z}^v$, then $\mathbf{Z}^v$ would also retain semantic-irrelevant information, which might hinder it from exploring the common semantics of multiple views. In this regard, we conduct experiments investigating different reconstruction positions and report the clustering accuracy as follows.
| Setting | DHA | CCV | NUSWIDE | Caltech | YoutubeVideo |
|----|----|----|----|----|----|
| Reconstruction on $\mathbf{H}^v$ | 80.9 | 39.4 | 60.4 | 87.2 | 31.3 |
| Reconstruction on $\mathbf{Z}^v$ | 72.4 | 27.8 | 60.1 | 86.6 | 30.9 |
We can observe that reconstruction on $\mathbf{Z}^v$ causes some performance degradation (especially on the DHA and CCV datasets) compared with reconstruction on $\mathbf{H}^v$. Therefore, the reconstruction loss in our framework is imposed on $\mathbf{H}^v$ instead of $\mathbf{Z}^v$, to reduce the interference of semantic-irrelevant information with the contrastive learning on $\mathbf{Z}^v$.
>Q2: Why the reconstruction leads to a greater improvement needs a more detailed explanation in the experimental section. Besides, in terms of the method's naming, it would be beneficial to highlight the concept of the reconstruction component.
If there are no supervisory signals during data processing, useful discriminative information might be lost before contrastive learning takes place. The reconstruction regularization makes the hidden features retain the discriminative information of the data, so that contrastive learning can still access it; information can only be exploited if it is preserved, which is why the reconstruction leads to such large improvements. The self-weighted module and the reconstruction regularization are both important for our method. Hence, we would like to rename our method "Self-weighted multi-view contrastive learning with reconstruction regularization".
>Q3: The presentation and captions of Figures 1 and 5 can be further improved, and it would be ideal to summarize the conclusions and connections among the data displayed in the three figures.
We will improve the presentation and captions of the figures. To be specific, the visualizations in Figures 2 and 5 are both carried out on the Caltech dataset. Figure 2 shows the representation learning process of traditional contrastive learning, and Figure 5 displays that of our proposed SEM, whose framework is illustrated in Figure 1. For example, views 4 and 5 could form a useful pair whose contrastive learning is strengthened, while views 1 and 4 could form an unreliable pair whose contrastive learning is weakened.
>Q4: This method utilizes a number of models equal to the scale of the views, and it also requires calculating pairwise weights, which significantly increases the memory usage and training time as the number of views increases. This paper does not include sufficient discussions about the computational cost and memory overhead, which are suggested to be included.
Thanks for this suggestion. To achieve multi-view contrastive learning, previous methods (such as CMC, DCP, MFLVC, and DSIMVC) also need different models to transform different views into the same form, as multi-view data typically involve heterogeneous data formats. Since mini-batch optimization is adopted, the computational cost of our method (and of the comparison methods) is linear in the sample size. Additionally, the number of views in multi-view data is usually no more than 6 in practical scenarios, so calculating pairwise weights does not incur unaffordable memory usage or computational burden. We have put the complexity analysis in Appendix A (Page 7). The practical time cost of our method is also shown in Table 3 in Appendix C (Page 8). | Summary: This paper studies representation degeneration in multi-view contrastive learning. To address it, the paper proposes a simple but effective self-weighted multi-view contrastive learning framework.
Strengths: ++The manuscript is well-written and self-consistent. For example, the visualization analysis makes it easy for the reader to understand that considering view differences is necessary for multi-view data.
++The motivation of this work is reasonable, and there are sufficient ablation experiments and theoretical analyses to verify the effectiveness of the proposed SEM algorithm.
++SEM is a general framework that can be adapted to a variety of existing contrastive learning losses, as well as to a variety of autoencoder models.
Weaknesses: --Figure 5(a) shows that weights are updated dynamically in different iterations. Here, are the weights incrementally and linearly increased or are they only updated 4 times?
--The class mutual information weighting strategy is an interesting method that aims to easily measure the discrepancy between views by compressing the representative class-semantic information of features into one-hot labels. Thus, Eq. (5) seems to be missing some constraints for $\mathbf{Y}^{v*}$ to be valid labels.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In Figure 5(a), are the weights incrementally and linearly increased or are they only updated 4 times?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response To Reviewer dnwM:
>Q1: Figure 5(a) shows that weights are updated dynamically in different iterations. Here, are the weights incrementally and linearly increased or are they only updated 4 times?
We are sorry that the illustration in Figure 5(a) is not clear enough. The weights are only updated 4 times during training; the lines between points merely indicate the change tendency.
>Q2: The class mutual information weighting strategy is an interesting method that aims to easily measure the discrepancy between views, which compresses the representative class-semantic information of features into one-hot labels. As thus, Eq. (5) seems to be missing some constraints of $\mathbf{Y}^{v*}$ of being the labels.
Thanks for this valuable comment. We will complete the constraints of Eq. (5) that make $\mathbf{Y}^{v*}$ represent one-hot labels, i.e., s.t. $\sum_{j=1}^{K} y_{ij}^v = 1$, $y_{ij}^v \in \{0, 1\}$, where $y_{ij}^v$ denotes an entry of $\mathbf{Y}^{v}$.
---
Rebuttal Comment 1.1:
Comment: Thanks for the efforts in the response. All my concerns have been addressed. | Summary: In this paper, the authors show that the representation degradation could limit the application of contrastive learning in multi-view scenarios. To mitigate this issue, they propose the self-weighted multi-view contrastive learning, a general framework that has different options in the contrastive loss, weighting strategy, and reconstruction term. In my opinion, this paper does a good job of showing what is emphasized, with many strengths. I've listed them below, along with some possible weaknesses.
Strengths: #1. The manuscript flows smoothly and is easy to understand. The analysis in the manuscript is well-thought-out and facilitates understanding the motivation and methodology.
#2. Authors show a new multi-view contrastive learning framework that considers handling representation degeneration via self-weighting and information reconstruction.
#3. The framework is technically sound. The manuscript provides three variants of weighting strategy including class mutual information, Jensen-Shannon divergence, and maximum mean discrepancy, and they are with different advantages.
#4. The framework helps existing contrastive learning methods, like InfoNCE, achieve significant performance improvements in multi-view scenarios.
#5. Sufficient ablation experiments and implementation details are provided.
Weaknesses: #1. There is a technical detail that needs to be clarified. SEM adaptively strengthens contrastive learning between useful pairwise views and weakens contrastive learning between unreliable pairwise views. This process is conducted without supervision by the designed self-weighting framework. But after training, we still don't know which of the representations of the multiple views is a good one. Are the representations learned from useful pairwise views artificially selected to evaluate performance on downstream clustering tasks?
#2. It would be better for the related work to include some recent works on contrastive learning.
#3. The abbreviation should be interpreted, such as SIFT, STIP, and MFCC.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed social impacts and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response To Reviewer USXC:
>Q1: Are the representations learned from useful pairwise views artificially selected to evaluate the performance of downstream clustering tasks?
No. To comprehensively evaluate the performance, our experiments use the concatenation of all learned representations from different views.
>Q2: It would be better for related work to include some recent jobs of contrastive learning.
Thanks for this comment. We will further improve the manuscript by discussing more recently published work.
>Q3: The abbreviation should be interpreted, such as SIFT, STIP, and MFCC.
Thanks for the suggestion. We add the following new table to interpret the abbreviations used in this paper.
| Abbr. | Meaning |
|----|----|
| SIFT | Scale-Invariant Feature Transform |
| STIP | Space-Time Interest Points |
| MFCC | Mel-Frequency Cepstral Coefficients |
| CENTRIST | Census Transform Histogram |
| HOG | Histogram of Oriented Gradients |
| LBP | Local Binary Pattern |
---
Rebuttal 2:
Comment: Thanks for the response. I agree on the novelty of this work and keep my accept decision.
Strengths: In unsupervised settings, it is hard but crucial for multi-view learning to automatically know which views' features carry useless noise and which contain useful semantic information. I believe the idea of pairwise self-weighted contrastive learning is novel. The proposed method adapts to quality differences among multiple views, and it does not require much prior knowledge.
Moreover, multi-view representation learning is important for working with multi-view data as raw views usually have inconsistent semantic meaning when one considers them in practical applications. In this paper, the authors present a robust multi-view contrastive learning method, whose effectiveness is verified by extensive experiments.
Weaknesses: (1) To reduce information loss, the proposed method treats the last layer of the encoders as hidden features. It is not clear exactly which layer serves as H and which as Z.
(2) When using the weighting strategies, the framework needs reconstruction pretraining to obtain meaningful representations. The decoder is added to the framework as an auxiliary module. As far as I know, many self-supervised multi-view methods also use autoencoders as the main representation learning module. Some ablation experiments are expected.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors show the performance of representation learning relying solely on autoencoders to understand the individual contributions between contrastive learning and reconstruction objectives?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This research is not expected to introduce new negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response To Reviewer tvpR:
>Q1: To reduce losing information, the proposed method treats the last layer of encoders as hidden features. It's not clear exactly which layer is as H, and which layer is as Z.
Thanks for this valuable comment. We present the network setting in Appendix B (Page 7). Specifically, if we view the network from $\mathbf{X}^v$ to $\mathbf{Z}^v$ as an encoder, then $\mathbf{Z}^v$ is the output of the last layer of the encoder, and $\mathbf{H}^v$ is the output of its penultimate layer. We will further clarify this description in the final paper.
>Q2: As far as I known, many self-supervised multi-view methods also use autoencoders as the main representation learning module. Some ablation experiments are expected.
We conduct additional ablation experiments and report the clustering accuracy tested on the learned representations in the following table.
| Method | DHA | CCV | NUSWIDE | Caltech | YoutubeVideo |
|----|----|----|----|----|----|
| vanilla AE | 69.2 | 14.3 | 38.7 | 86.0 | 20.0 |
| DCP | 69.8 | 24.1 | 48.1 | 69.6 | 14.0 |
| MFLVC | 70.7 | 31.6 | 55.9 | 77.1 | 18.3 |
| DSIMVC | 63.8 | 31.8 | 56.7 | 76.9 | 18.9 |
| SEM w/ AE | 80.9 | 39.4 | 60.4 | 87.2 | 31.3 |
We can observe that, previous AE-based methods (i.e., DCP, MFLVC, and DSIMVC) do not obtain better results than the vanilla AE on some datasets (e.g., Caltech and YoutubeVideo). In comparison, our proposed SEM w/ AE achieves better performance, as it incorporates the self-weighted contrastive learning to handle the representation degeneration.
>Q3: Can the authors show the performance of representation learning relying solely on autoencoders to understand the individual contributions between contrastive learning and reconstruction objectives?
Yes. We conduct ablation experiments on three kinds of autoencoders and report clustering accuracy tested on the learned representations as follows.
| Method | DHA | CCV | NUSWIDE | Caltech | YoutubeVideo |
|----|----|----|----|----|----|
| AE w/o SEM | 69.2 | 14.3 | 38.7 | 86.0 | 20.0 |
| DAE w/o SEM | 70.4 | 12.7 | 39.5 | 86.4 | 21.7 |
| MAE w/o SEM | 70.0 | 14.6 | 35.8 | 86.2 | 22.8 |
| SEM w/o AEs | 60.5 | 28.7 | 57.7 | 79.4 | 32.7 |
| SEM w/ AE | 80.9 | 39.4 | 60.4 | 87.2 | 31.3 |
| SEM w/ DAE | 81.5 | 38.4 | 59.5 | 86.6 | 38.8 |
| SEM w/ MAE | 83.0 | 39.5 | 60.9 | 86.7 | 33.3 |
Firstly, the autoencoders have shown their basic representation abilities. They obtain good results on DHA and Caltech but fail on CCV and NUSWIDE (AE/DAE/MAE w/o SEM). Secondly, contrastive learning is conducive to exploring useful mutual information among multiple views. It obtains improvements on CCV and NUSWIDE but fails on DHA and Caltech (SEM w/o AEs). Thirdly, our framework leverages self-weighted contrastive learning while using reconstruction objectives to avoid losing discriminative information. Therefore, SEM with AE/DAE/MAE can obtain significant performance improvements compared with the above results.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing additional details on the methodology and conducting ablation studies on the autoencoders. Based on the clarification and additional information provided, I will maintain my current rating of acceptance for the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In summary, this paper investigates an important question: how to mitigate representation degradation in multi-view contrastive learning. Considering the quality differences among views and the loss of useful information through the networks, the proposed method uses adaptively weighted contrastive learning and adds information reconstruction to improve the performance of contrastive learning in multi-view learning.
Strengths: I) The inductive bias of contrastive learning may allow the newly learned representation to capture trivial information, thus causing representation degradation. So, it is meaningful to propose an effective framework to alleviate this issue in this paper.
II) The authors proposed a novel multi-view contrastive learning framework called SEM with multiple implementations. For example, the paper provides different weighting strategies with different advantages and experiments show their effectiveness.
III) Comparison experiments on different datasets show that SEM combined with InfoNCE/RINCE/PSCL significantly improves over InfoNCE/RINCE/PSCL alone, indicating the effectiveness of the framework.
IV) The supplementary material is well organized, including detailed appendices as well as code.
Weaknesses: I) Due to the high complexity of MMD computation, it seems difficult to obtain MMD results on the YoutubeVideo dataset (over 100,000 samples).
II) Introducing weighting strategies may increase the number of hyper-parameters. It might be better to add more descriptions, e.g., MMD.
III) Some grammatical mistakes need to be corrected, e.g., transfers is transfer in Line 155; adaptively weighting is adaptively weight in Line 304.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response To Reviewer gird:
>Q1: Due to the high complexity of MMD computation, it seems difficult to obtain MMD results on YoutubeVideo dataset (over 100,000 samples).
Yes, it is indeed difficult to obtain the weights of the MMD weighting strategy on YoutubeVideo. Therefore, to reduce the computational complexity of MMD, we leveraged partial rather than all samples when applying the MMD weighting strategy. We discussed this in the complexity analysis in Appendix A (Page 7). Furthermore, Appendix B (Page 7) provides the implementation details: the weights of the MMD weighting strategy are computed using only the first 2,000 samples on YoutubeVideo, so the computational complexity of our method is controllable in practical use.
>Q2: Introducing weighting strategies may increase the number of hyper-parameters. It might be better to add more descriptions, e.g., MMD.
In our experiments, for the CMI weighting strategy, the cluster number is pre-defined as the true class number of a dataset. For the MMD weighting strategy, the bandwidth and the number of kernels are set to 4 on all datasets used in this paper.
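For illustration, a multi-kernel MMD of the kind referred to above can be sketched as follows (a generic V-statistic estimator with a sum of RBF kernels; the bandwidth values are placeholders, not the paper's exact configuration):

```python
import math

def rbf(u, v, bw):
    """RBF kernel with bandwidth bw."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2 * bw ** 2))

def mmd2(xs, ys, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Squared MMD between two samples, using a sum of RBF kernels
    (biased V-statistic estimator)."""
    def k(u, v):
        return sum(rbf(u, v, bw) for bw in bandwidths)
    n, m = len(xs), len(ys)
    kxx = sum(k(a, b) for a in xs for b in xs) / (n * n)
    kyy = sum(k(a, b) for a in ys for b in ys) / (m * m)
    kxy = sum(k(a, b) for a in xs for b in ys) / (n * m)
    return kxx + kyy - 2 * kxy
```

The subsampling mentioned in the previous answer corresponds to passing only a few thousand samples as `xs` and `ys`, since the estimator's cost is quadratic in the sample size.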
>Q3: Some grammatical mistakes need to be corrected, e.g., transfers is transfer in Line 155; adaptively weighting is adaptively weight in Line 304.
Thanks for these comments. We will correct the grammatical mistakes and further polish our draft in the final version. | null | null | null | null | null | null |
Stochastic Approximation Algorithms for Systems of Interacting Particles | Accept (poster) | Summary: This paper analyses discretisations of mean-field type SDEs arising in several areas of machine learning. The main contribution is a convergence result (Theorem 1) stating that under appropriate conditions on the drift and diffusion coefficients, the discretised dynamics convergence in 2-Wasserstein distance, in an infinite time horizon limit, to the continuous-time system of interacting particles. Using the classical uniform propagation of chaos for interacting particle systems to show convergence to the mean field dynamics. Applications to Two-Layer NNs training by SGD, Stein Variational Gradient Descent, Two-Player Zero-sum Continuous Games and Kinetic Equations are presented.
Strengths: As outlined by the authors, SDEs of mean-field type arise in several areas of machine learning and statistics as continuous-time and infinite-number-of-particle limits of discrete stochastic difference equations. Therefore, a convergence analysis of the discrete schemes to the their continuous-time counterparts is of high relevance and importance within the modern machine learning landscape. The paper is well-written and clear, its structure is easy to follow. The four examples on Two-Layer NNs, Stein Variational Gradient Descent, Two-Player Zero-sum Continuous Games and on Kinetic Equations nicely demonstrate the applicability of the main result (Theorem 1) to several topics/areas of modern machine learning.
Weaknesses: There is a rich body of literature on SDE discretisation schemes for McKean-Vlasov SDEs and interacting particle systems [1, 2, 3] that is completely ignored by the authors. The results of these papers concern convergence of Euler–Maruyama and/or Milstein type numerical schemes to the limiting mean-field equations when the step size of the solver goes to zero and the number of particles goes to infinity. As far as I see, the main difference in the analyses is in the notion of convergence in time: the authors consider as convergence criterion the *Wasserstein asymptotic pseudotrajectory* (WAPT), which is a large-time behaviour from dynamical systems theory, while in the aforementioned series of papers the convergence is in terms of the discretisation step, which is more classical in (numerical) stochastic analysis. Although the two notions of convergence are different, I think an in-depth discussion and comparison between the two is required.
I invite the authors to initiate a conversation on the above during the rebuttal period. If a rigorous and fair comparison/discussion is eventually presented, I will happily increase my rating.
**References**
[1] Bao, Jianhai, et al. "First-order convergence of Milstein schemes for McKean–Vlasov equations and interacting particle systems." Proceedings of the Royal Society A 477.2245 (2021): 20200258.
[2] Reisinger, Christoph, and Wolfgang Stockinger. "An adaptive Euler–Maruyama scheme for McKean–Vlasov SDEs with super-linear growth and application to the mean-field FitzHugh–Nagumo model." Journal of Computational and Applied Mathematics 400 (2022): 113725.
[3] Leobacher, Gunther, Christoph Reisinger, and Wolfgang Stockinger. "Well-posedness and numerical schemes for one-dimensional McKean–Vlasov equations and interacting particle systems with discontinuous drift." BIT Numerical Mathematics 62.4 (2022): 1505-1549.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors comment on the choice of 2-Wasserstein distance in the WAPT convergence criterion?
- The authors make repetitive use of the notion of a *limit set*; although this is classical in the theory of dynamical systems, it is worth defining it in the main body of the paper.
- The paper would benefit from the addition of an *idea-of-proof* paragraph following Theorem 1 (still deferring the details to the appendix).
- (This is not necessarily a question targeted to this paper, but more generally to the community working at the intersection between NNs and mean-field dynamics) What are the issues in analysing more-than-two-layer feedforward NNs using the language of mean-field SDEs? What about more general architectures such as ResNets, RNNs, etc.?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed limitations of their contribution and discussed future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable and insightful review, especially for pointing out the missing references. We will incorporate the Reviewer's recommendations in the revision accordingly.
> There is a rich body of literature on SDE discretisation schemes for McKean-Vlasov SDEs and interacting particle systems [1, 2, 3] that is completely ignored by the authors. The results of these papers concern convergence of Euler–Maruyama and/or Milstein type numerical schemes to the limiting mean-field equations when the step size of the solver goes to zero and the number of particles goes to infinity. As far as I see, the main difference in the analyses is in the notion of convergence in time: the authors consider as convergence criterion the Wasserstein asymptotic pseudotrajectory (WAPT), which is a large-time behaviour from dynamical systems theory, while in the aforementioned series of papers the convergence is in terms of the discretisation step, which is more classical in (numerical) stochastic analysis. Although the two notions of convergence are different, I think an in-depth discussion and comparison between the two is required.
We sincerely appreciate the Reviewer for bringing these references to our attention, which we had indeed overlooked. It is important to clarify several significant differences between our work and the references [1, 2, 3]:
- **Generic Stochastic and Biased Drift Oracles**: Our primary focus lies in generic stochastic and biased drift oracles denoted by $b(\cdot,\cdot)$, while [1, 2, 3] concentrate on deterministic and unbiased drift oracles. Consequently, our algorithmic framework is substantially more general than theirs.
- **Asymptotic vs Finite-Time Bounds**: On the other hand, our convergence results provide asymptotic guarantees, whereas the results in [1, 2, 3] offer much stronger bounds, explicitly controlling the $\mathcal{W}_2$ error in finite time.
- **Incomparable Assumptions**: Apart from the aforementioned differences, our work and [1, 2, 3] rely on incomparable assumptions. For instance, we impose global Lipschitz drifts, whereas [1, 2, 3] can handle more general drifts with only one-sided Lipschitzness. On the other hand, our milder growth condition in Assumption 2 requires control on average, while the stronger pointwise controls are assumed in [1, 2, 3].
In light of these distinctions, we believe that our paper complements the references pointed out by the Reviewer. Moreover, these works raise an intriguing research question: Can the Milstein schemes be adapted as stochastic approximation schemes within our framework, potentially leading to stochastic versions of Milstein schemes that enhance computational efficiency? We look forward to exploring this avenue for future research.
> Can the authors comment on the choice of 2-Wasserstein distance in the WAPT convergence criterion?
There are two major reasons for adopting the Wasserstein distances in our framework:
- It is a popular metric in the propagation of chaos literature, which our framework relies on. Adopting the Wasserstein metrics therefore allows for a seamless transition from stochastic approximation schemes (finite step-size + finite particles) to its mean-field continuous-time limits (infinitesimal step-size + infinite particles) by combining our theory and the propagation of chaos results.
- It is important that the 2-Wasserstein space is a **metric space** on which McKean–Vlasov equations can be viewed as a **flow**; both aspects are indispensable for invoking the dynamical-systems theory of Benaïm and Hirsch.
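As an illustrative aside (not part of the original rebuttal): for one-dimensional empirical measures with equally many atoms, the 2-Wasserstein distance is realised by the monotone coupling, i.e., by matching sorted samples. The following sketch demonstrates this; the function name `w2_empirical_1d` is ours, chosen for illustration.

```python
import numpy as np

def w2_empirical_1d(x, y):
    # For 1-D empirical measures with equally many atoms, the optimal
    # coupling for W_2 matches sorted samples (monotone rearrangement).
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    return float(np.sqrt(np.mean((x - y) ** 2)))

# Translating a point cloud by a constant c moves it by exactly c in W_2.
samples = np.random.default_rng(0).normal(size=1000)
print(w2_empirical_1d(samples, samples + 2.0))  # ≈ 2.0
```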
> The authors make repetitive use of the notion of a limit set; although this is classical in the theory of dynamical systems, it is worth defining it in the main body of the paper.
We agree on this point and we intend to include relevant definitions in the revision.
> The paper would benefit from the addition of an idea-of-proof paragraph following Theorem 1 (still deferring the details to the appendix).
We agree with the Reviewer. The main issue preventing us from this was the page limit, which can be easily addressed given an extra page on the camera-ready version.
> (This is not necessarily a question targeted to this paper, but more generally to the community working at the intersection between NNs and mean-field dynamics) What are the issues in analysing more-than-two-layer feedforward NNs using the language of mean-field SDEs? What about more general architectures such as ResNets, RNNs, etc.?
It is possible to extend our framework beyond two layers: The mean-field limit for multilayer or structured neural networks is a research area with several notable contributions. In particular, [NP], [SS], and [AOY] have explored the mean-field limit for deep networks, while [F] has addressed this topic in the context of ResNets.
The rationale behind our selection of a 2-layer neural network lies in the pursuit of clarity in representation. By focusing on this simpler architecture, we aim to provide a more straightforward and accessible presentation of our work.
---
We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear.
Thank you again for your input and positive evaluation,
The authors
**References:**
[NP] A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks by Phan-Minh Nguyen, Huy Tuan Pham.
[SS] Mean Field Analysis of Deep Neural Networks by Justin Sirignano and Konstantinos Spiliopoulos.
[AOY] A mean-field limit for certain deep neural networks by Dyego Araújo, Roberto I. Oliveira, and Daniel Yukimura.
[F] Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks by Fang et al.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses, which have addressed my concerns. I do not have any further questions. I keep my rating unchanged, and I'm considering raising it to 7. I will make a final decision after consultation with other reviewers and the AC.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We extend our gratitude to the Reviewer for pointing out the missing references and for the thoughtful consideration of a potential score increase. We will integrate these discussions into our forthcoming revision. | Summary: This paper develops a theoretical mathematical framework to characterize the convergence properties of discrete particle systems to their mean-field limit.
Strengths: The mathematical theory in this paper is beyond my scope, but it appears to be mathematically sound. The paper is well-written.
Weaknesses: I believe that this paper would benefit from some applied tests/results to show practical relevance. For example, how would it help training a GAN? Do the theoretical convergence results help actual training? How do the results compare to actual training? Are the bounds tight relative to actual convergence?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I think it would be helpful to provide some examples of how these theoretical results can help inform and guide ML development, etc. For example, how can I use these convergence guarantees to design a NN?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Limitations do not seem to be explicitly addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for raising the issue of practical relevance. We have taken this concern seriously and made the necessary adjustments to address it in the rebuttal below, which will be incorporated into the revision.
Having addressed the concerns and made the appropriate changes, we sincerely hope for a re-evaluation of our work based on the constructive discussions we have engaged in. We are open and eager for any further discussions or feedback that can contribute to the improvement and practical applicability of our submission.
> I believe that this paper would benefit from some applied tests/results to show practical relevance. For example, how would it help training a GAN? Do the theoretical convergence results help actual training? How do the results compare to actual training? Are the bounds tight relative to actual convergence?
>Questions:
I think it would be helpful to provide some examples of how these theoretical results can help inform and guide ML development, etc. For example, how can I use these convergence guarantees to design a NN?
We address these concerns together. The strength of our framework lies not in the design of neural networks but rather in its **algorithmic flexibility**. We provide two justifications to support this claim:
1. **Providing rigorous guarantees for existing interacting particle systems:** In the machine learning community, exploiting stochastic gradients is a common practice for training large-scale neural networks, even when the algorithm's motivation and analysis are based on *deterministic* gradients. This is evident in methods like SVGD, where literature on stochastic gradients is scarce (cf. Section 4.2). In this context, our framework establishes rigorous convergence guarantees for the important stochastic SVGD methods under the mild assumption that the noise has a finite variance, ensuring the applicability of these popular schemes.
2. **Enabling algorithmic design:** We present a pertinent example in multi-agent learning that is not included in the current version of our submission. The (GDA$\_k$) and (OGDA$\_k$) schemes in our paper rely on *simultaneous* updates, i.e., $(x_k, y_k) \rightarrow (x_{k+1}, y_{k+1})$, as dictated by existing theory. However, empirical evidence suggests that *alternating* updates $(x_k, y_k) \rightarrow (x_{k}, y_{k+1}) \rightarrow (x_{k+1}, y_{k+1})$ often performs better. Our framework allows for this flexibility, as it is easy to cast alternating (GDA$\_k$) and (OGDA$\_k$) stochastic approximation schemes satisfying **Assumption 4**, and thus, convergence is guaranteed according to our theory.
In a broader context, our algorithmic template facilitates flexible design by merely verifying a few straightforward assumptions. This empowers researchers and practitioners to explore and develop novel algorithms that suit specific requirements and scenarios, such as the game-theoretic settings mentioned earlier.
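To make the simultaneous-vs-alternating distinction concrete, here is a minimal illustrative sketch (ours, not from the paper) on the bilinear game $f(x, y) = xy$, where simultaneous gradient descent-ascent is known to spiral outward while the alternating variant remains bounded:

```python
import numpy as np

def simultaneous_gda(x, y, gamma, steps):
    # Both players update from the same old iterate (x_k, y_k).
    for _ in range(steps):
        x, y = x - gamma * y, y + gamma * x
    return x, y

def alternating_gda(x, y, gamma, steps):
    # The ascent step sees the freshly updated x_{k+1}.
    for _ in range(steps):
        x = x - gamma * y
        y = y + gamma * x
    return x, y

# On f(x, y) = x * y with (x_0, y_0) = (1, 1) and gamma = 0.1:
print(np.hypot(*simultaneous_gda(1.0, 1.0, 0.1, 1000)))  # diverges (norm > 100)
print(np.hypot(*alternating_gda(1.0, 1.0, 0.1, 1000)))   # stays bounded (norm < 2)
```

The alternating update is a symplectic Euler step for the underlying rotation dynamics, which exactly conserves the quantity $x^2 + y^2 - \gamma xy$ and hence keeps orbits bounded for small $\gamma$.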
> Limitations do not seem to be explicitly addressed by the authors.
We will incorporate the above discussions into our forthcoming revision and make sure to highlight the limitations of our framework.
---
We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear.
Thank you again for your input and constructive criticism,
The authors
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I will raise my rating to a 6.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We appreciate your initiative in highlighting the practicality concerns within our theory. Furthermore, we extend our gratitude for your favorable re-assessment.
We kindly ask the Reviewer's attention to the pending score upgrade that was promised. Your assistance in fulfilling this commitment would be greatly appreciated. | Summary: This work considers the convergence of discrete time interacting particle systems to their respective continuous time limits (i.e, McKean-Vlasov type equations) under general assumptions which are applicable to varied contexts like neural networks, kinetic theory, game theory and sampling algorithms. The finite particle + finite step size algorithms are considered to be stochastic approximations of the mean field limit and convergence is analyzed in terms of dynamical systems theory. This work considers the notion of weak asymptotic pseudo-trajectory to show that the stochastic approximations are close to mean field limit under general conditions. These are then applied to various specific contexts like neural networks and SVGD to derive convergence bounds.
Strengths: The generality of the assumptions and the framework is the main contribution of this work. This enables the authors to derive several useful results in varied domains under a common framework. This can be a useful tool in establishing convergence to mean field limits in new problems without requiring elementary analysis.
Weaknesses: 1. The notation in the algorithmic template is extremely confusing. Shortening $b(x,\mu)$ to just $b(x)$ makes it very confusing. Population level SA is also a bad terminology since it confuses the reader about whether this is the mean-field limit (i.e, $n \to \infty$) or not (because the mean field limit is often referred to as the population limit).
2. WAPT as the notion of convergence requires more justification. This is so since the continuous process begins at $X_t$, the $t$-th time instant of the discrete time process and the uniform convergence is established as $t \to \infty$. What if the initial deviation in the stochastic approximation ensures that $X_t$ itself is not likely to be reached by PSDE ?
3. Assumption 5 is a bit non-standard. Also, I think there is a typo here. Assumption (11) is not satisfied for any decreasing step size sequence since $\gamma_{k+1}/\gamma_{k+2} > 1$. Please clarify and state what exact step sizes are allowed.
4. Under specific settings, much stronger results can be derived for convergence when the algorithm is designed specially or under specific assumptions like logarithmic sobolev inequalities (even with finite particles and constant step sizes). This framework precludes such analyses. (See [A1,A2]). I am not very well versed in the game theory or kinetic theory literature, so I will abstain from commenting on these results.
[A1] Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction.
[A2] Provably Fast Finite Particle Variants of SVGD via Virtual Particle Stochastic Approximation
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please answer the questions posed in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have a good discussion on the applicability of their work. I think they should discuss the drawbacks of WAPT better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable input and remarks. We are dedicated to addressing your concerns through revisions in our upcoming review. Having taken all your feedback into account, we kindly ask for your consideration in potentially revising the score.
> The notation in the algorithmic template is extremely confusing. Shortening $b(x,\mu)$ to just $b(x)$ makes it very confusing. Population level SA is also a bad terminology since it confuses the reader about whether this is the mean-field limit (i.e., $n \to \infty$) or not (because the mean field limit is often referred to as the population limit).
We thank the reviewer for this comment. We can use "aggregate drift/diffusion" instead of "population level drift/diffusion" and also make the notation for the aggregated drift and diffusion boldface to avoid further confusion.
> WAPT as the notion of convergence requires more justification. This is so since the continuous process begins at $X_t$, the $t$-th time instant of the discrete time process, and the uniform convergence is established as $t \to \infty$. What if the initial deviation in the stochastic approximation ensures that $X_t$ itself is not likely to be reached by PSDE? I think they should discuss the drawbacks of WAPT better.
We thank the Reviewer for bringing forth this question, leading us to recognize that our presentation can be substantially improved by clarifying this misunderstanding: WAPT is **not** intended to serve as a notion of convergence itself. Rather, we **prove** that popular schemes in practice **are** WAPTs, which **implies** convergence to the desirable measures in the standard 2-Wasserstein metric.
To conclude, the notion of WAPT is an important intermediate step in our analysis framework, not the final convergence result. This nuanced perspective will be highlighted in our forthcoming revision.
> Assumption 5 is a bit non-standard. Also, I think there is a typo here. Assumption (11) is not satisfied for any decreasing step size sequence since... Please clarify and state what exact step sizes are allowed.
Thanks a lot for pointing out this typo: Equation (11) should read $\gamma_{k+1}/\gamma_k + P \gamma_k\gamma_{k+1} < 1 - \gamma_k$. We will fix this in the final version.
We also remark that this assumption is not restrictive. For example, one can show that it is met by step-sizes as slow as $1 / (\sqrt{k} \log k)$.
> Under specific settings, much stronger results can be derived for convergence when the algorithm is designed specially or under specific assumptions like logarithmic sobolev inequalities (even with finite particles and constant step sizes). This framework precludes such analyses. (See [A1,A2]). I am not very well versed in the game theory or kinetic theory literature, so I will abstain from commenting on these results.
[A1] Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction.
[A2] Provably Fast Finite Particle Variants of SVGD via Virtual Particle Stochastic Approximation
Thank you for your insightful comments and references. While the Reviewer has rightfully pointed out that much stronger results can be derived under additional assumptions, we argue that our analysis *complements* instead of "precluding" stronger assumptions such as LSI: In scenarios where establishing LSIs proves challenging, such as in multi-agent systems or SVGD, our theory demonstrates that existing schemes still converge under remarkably mild conditions.
Additionally, we highlight that when strong assumptions like LSIs are present, it is possible to enhance the notion of asymptotic pseudo-trajectories to its non-asymptotic counterpart, known as the $\lambda$-pseudo-trajectories. However, as our primary focus lies in the generic setting where such assumptions are unavailable, we have chosen to defer these studies to future work.
---
We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear.
Thank you again for your input and positive evaluation,
The authors
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thanks for the response. I am satisfied with the rebuttal and raise my score to a 6.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for bringing several issues in our presentation to our attention. We also value your updated assessment. | Summary: This paper fills a theoretical gap between the application of ideas interacting particle systems to algorithms in machine learning--algorithms, like SGVD, that are almost always realized as discrete-time routines with a finite number of particles--and the substantial existing body of theoretical work on finite particle systems with continuous dynamics. These latter works have yielded valuable insights about, e.g., the training process of two-layer neural networks, algorithm design for approximate Bayesian inference, or the nature of equilibria in games. However, they have not rigorously established the convergence of discrete-time to continuous. This paper establishes that convergence via Benaïm and Hirsch's notion of a Wasserstein asymptotic pseudo-trajectory (WAPT), which gives a measure of asymptotic closeness (in the Wasserstein-2 sense) between two stochastic processes. Specifically, via Theorem 1, the convergence of the family of (discrete-time) stochastic approximation algorithms (SAA) is reduced to its continuous time counterpart. Combining Theorem 1 with existing results in the literature yields the conclusion that the empirical distribution of particles following the discrete-time SAA converges to the mean-field solution, as desired.
Strengths: The central originality of this paper lies in its adaptation of WAPT to solve an open problem in the mean-field theory of discrete-time IPS in machine learning.
Broadly, I found the clarity and elegance of the mathematical exposition to be exceptionally good. The paper persuasively argues for the importance of a rigorous theory of convergence, and smoothly introduces concepts and definitions needed for understanding Theorem 1, while re-orienting the reader by summarizing previous results at effective moments. After stating the Theorem, the applications of the theory to two-layer NNs, SVGD, games, and kinetic equations were clear and enlightening.
The significance of the presented comprehensive framework is high, as the future directions section makes clear.
Weaknesses: I found the paper without major weaknesses. In terms of the overall presentation of results, I was surprised to see interacting particle systems (IPS) as the frame for this theory rather than simply continuous-time Markov jump processes. I can see the value in specializing to interacting particle systems, but some readers may be deeply acquainted with continuous Markov jump processes and be largely unaware of the IPS literature. Bringing that connection onto the screen by mentioning it in the technical background may help orient readers with a more general stochastic process background.
In the same vein, a brief mention of the relationship to multi-agent systems could be of value in ensuring that this work reaches the wide readership for which its theory is relevant.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - For readers unfamiliar with pseudo-trajectories from the dynamical systems literature, a brief footnote or aside giving a terse formal definition may be helpful. The intuitive explanation is mostly clear--the requirement that the orbit X(t) closely tracks the flow over arbitrarily long time intervals T with arbitrary precision--but a formal statement that uses a constant like \lambda (as I see in line 352!) would likely not strain the reader too much.
- It may be valuable, in the future directions section or in an appendix, to discuss what kinds of algorithms do not fall under the algorithmic template of the SAA family, as a way to highlight other gaps in the existing mean-field theory.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately address the limitations of their work. I see no potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your input and remarks. We reply to your questions below, and we will revise our manuscript accordingly in the upcoming revision.
> In terms of the overall presentation of results, I was surprised to see interacting particle systems (IPS) as the frame for this theory rather than simply continuous-time Markov jump processes. I can see the value in specializing to interacting particle systems, but some readers may be deeply acquainted with continuous Markov jump processes and be largely unaware of the IPS literature. Bringing that connection onto the screen by mentioning it in the technical background may help orient readers with a more general stochastic process background. In the same vein, a brief mention of the relationship to multi-agent systems could be of value in ensuring that this work reaches the wide readership for which its theory is relevant.
We thank the reviewer for bringing this up. Indeed, we have altogether omitted the discussion related to Markov jump processes via passing to the mean-field limit $N\rightarrow\infty$. A similar remark holds for the multi-agent perspective: The mean-field game perspective allows us to bypass it analytically, but we agree that expanding upon these perspectives could enhance the clarity and understanding of our work.
> For readers unfamiliar with pseudo-trajectories from the dynamical systems literature, a brief footnote or aside giving a terse formal definition may be helpful. The intuitive explanation is mostly clear--the requirement than the orbit X(t) closely tracks the flow over arbitrarily long time intervals T with arbitrary precision--but a formal statement that uses a constant like $\lambda$ (as I see in line 352!) would likely not strain the reader too much.
We thank the Reviewer for this reminder: We have indeed defined the concept of a WAPT early-on (line 139). The other concept ($\lambda$-pseudotrajectory) is stronger than the usual APT, and gives stronger results such as asymptotic rates, which is left for future work (see lines 351-354).
> It may be valuable, in the future directions section or in an appendix, to discuss what kinds of algorithms do not fall under the algorithmic template of the SAA family, as a way to highlight other gaps in the existing mean-field theory.
A prime illustration of this is the *adaptive* schemes, like Adam, for training wide two-layer neural networks. Yet another very important example that our theory does not have guarantees for is the Ensemble Kalman Sampler, which we mentioned in the conclusion. Even though this algorithm follows the SAA template, one does not know *a priori* if the diffusion coefficient is bounded in Hilbert-Schmidt norm. Proving convergence of these algorithms is challenging (even in continuous-time) and we do not yet know how to deal with such problems.
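As a concrete instance of the SAA template discussed in this thread, the following illustrative sketch (all names, parameters, and the specific drift are ours, chosen for simplicity) runs an Euler–Maruyama discretisation of an interacting particle system whose drift couples each particle to the empirical mean:

```python
import numpy as np

def euler_maruyama_ips(n_particles=500, dt=0.01, n_steps=500,
                       alpha=1.0, sigma=0.5, seed=0):
    # Discrete-time stochastic approximation of the McKean-Vlasov dynamics
    #   dX_t = (-X_t + alpha * (E[X_t] - X_t)) dt + sigma dW_t,
    # with the mean field replaced by the empirical mean of N particles.
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, 2.0)  # common deterministic initial condition
    for _ in range(n_steps):
        m = x.mean()               # empirical approximation of the mean field
        drift = -x + alpha * (m - x)
        x = x + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

particles = euler_maruyama_ips()
# For these parameters the cloud contracts toward 0 with spread near
# sigma / sqrt(2 * (1 + alpha)) = 0.25.
print(particles.mean(), particles.std())
```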
---
We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear.
Thank you again for your input and positive evaluation,
The authors | Rebuttal 1:
Rebuttal: Dear AC, dear reviewers,
We deeply appreciate your time, input, and thoughtful critiques, as well as your positive evaluation. Your contributions have our sincere gratitude, and all your questions are addressed in a separate point-by-point thread below.
A focal concern that has emerged pertains to the practical relevance of our paper. In this context, we wish to emphasize that our work brings forth two pivotal contributions with direct relevance for practitioners:
1. **Ensuring guarantees for popular heuristics:** Numerous widely-employed interacting particle systems, such as several stochastic variants of SVGD, currently lack convergence guarantees. Our paper fills this gap by establishing their rigorous convergence, thereby solidifying the reliability of these approaches.
2. **Devising novel schemes:** Through a simple validation of **Assumption 4**, our framework furnishes a template for inspiring novel schemes. An illustration of this potential is elucidated in the discussion with Reviewer S8Cs, where we delve into a concrete application within game theory.
For a comprehensive exploration of the remaining points, we refer you to the specific threads tailored to each reviewer's concerns. As we proceed to the discussion phase, we eagerly anticipate any further inquiries that may arise.
With the utmost appreciation,
The authors | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Planning Abilities of Large Language Models - A Critical Investigation | Accept (spotlight) | Summary: This paper conducts a systematic study by generating a suite of instances on domains similar to the ones employed in the International Planning Competition and evaluating LLMs in two distinct modes: autonomous and heuristic. The experiments show that LLMs' ability to generate executable plans autonomously is rather limited, while the results in the heuristic mode show more promise.
Strengths: 1. The paper is well-written and easy to understand.
2. The paper provides a detailed investigation of GPT's planning abilities in different domains and presents some interesting findings.
Weaknesses: 1. Some of the conclusions presented in the paper, such as re-planning improving performance, have already been widely applied in robotics task and motion planning applications, and are not considered novel. The community has developed a range of interesting algorithms to enhance the planning capabilities of LLMs, including re-planning and generating feasible plans [1,2,3,4,5], yet the authors ignore these efforts.
2. The author's investigation of GPT's planning abilities in certain domains overlooks the fact that GPT's greatest strength lies in its zero-shot or few-shot capabilities across different domains, without the need for pre-defined action spaces. Additionally, the metrics used to evaluate GPT are not fair to the model itself. At least human evaluation should be introduced for a more comprehensive assessment. More detailed experiments and metrics can be found in [zero-shot].
3. For a survey paper, it is not sufficient to only consider OpenAI's GPT-level models. Open-source models like LLaMA and Vicuna should also be included in the analysis to better understand if there are fundamental differences in planning capabilities across different levels of language models.
4. More demonstrations in the prompts seem to effectively improve planning performance. The author only conducted zero-shot and one-shot experiments, which is insufficient. An ablation experiment to explore the importance of the "how much-shot" factor would be valuable to the community.
5. Some relevant papers, listed in the references below, are not cited.
References:
[1] Inner Monologue: Embodied reasoning through planning with language models
[2] Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents
[3] ReAct: Synergizing Reasoning and Acting in Language Models
[4] Reflexion: Language Agents with Verbal Reinforcement Learning
[5] Text2Motion: From Natural Language Instructions to Feasible Plans
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Answer questions in weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank reviewer Y3FL for their thoughtful feedback. We are glad that the reviewer found our work to be well-written, detailed and interesting. We will incorporate all the reviewer's suggestions such as citing other relevant papers. Below, we provide responses to some of the concerns raised by the reviewer.
> 1. Some of the conclusions presented in the paper, such as re-planning improving performance, have already been widely applied in robotics task and motion planning applications, and are not considered novel.
We thank the reviewer for bringing up these relevant papers. We will make sure to cite them. Firstly, we would like to differentiate our backprompting technique from the scenarios that works like [3,4] describe. Our backprompting method provides only verification feedback that can be deduced from the original problem specification. The methods in those works do not focus on providing verification for an end-to-end plan but instead provide step-by-step environmental feedback that can inform later steps. This distinction is important, particularly in non-ergodic domains where irreversible actions could be executed. Additionally, those works provide search guidance information as part of their prompts, and this information is created by humans, which could potentially lead to phenomena like the Clever Hans effect [1].
Further, in examining the interaction resolution for the domains presented in these works, there appear to be some simplifications. In [5], the instructions seem to provide a substantial amount of the high-level plan, which could potentially reduce the LLM to a semantic parser. Similarly, in [2], the block stacking tasks often present scenarios where n-1 blocks are already stacked, only requiring the agent to stack the nth block. We believe that these efforts do not shed light on the plan generation capabilities of the LLMs themselves as both our evaluations in autonomous and heuristic modes do.
[1] Clever hans or neural theory of mind? stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763.
[2] Inner Monologue: Embodied reasoning through planning with language models
[3] ReAct: Synergizing Reasoning and Acting in Language Models
[4] Reflexion: Language Agents with Verbal Reinforcement Learning
[5] Text2Motion: From Natural Language Instructions to Feasible Plans
> 2. The author's investigation of GPT's planning abilities in certain domains overlooks the fact that GPT's greatest strength lies in its zero-shot or few-shot capabilities across different domains, without the need for pre-defined action spaces. Additionally, the metrics used to evaluate GPT are not fair to the model itself. At least human evaluation should be introduced for a more comprehensive assessment.
As we discuss in Section 2 (lines 110-126), we readily concede that the approximate omniscience of LLMs allows them to retrieve relevant planning knowledge in many cases. The main point of our paper is that doing correct planning requires both having planning knowledge and dealing with the situation-specific interaction resolution issues to ensure the correctness of the plan. Our paper shows both that LLMs can’t do the second part, and that we can gainfully leverage LLMs’ approximate retrieval capabilities in the context of external planners/verifiers to provide better planning capabilities. In other words, we are saying that LLMs can be useful even without us erroneously ascribing them planning capabilities they don’t have.
In terms of metrics, since we are talking about plan correctness, and the model is known, it makes sense to consider the categorical correctness of the plan. Human evaluations don’t provide that as humans may themselves be careless verifiers and/or suffer from automation bias (as we discuss in Appendix A.10)
> 3. For a survey paper, it is not sufficient to only consider OpenAI's GPT-level models.
We believe our work is not a survey paper but rather a critical examination of the planning capabilities of state-of-the-art LLMs. GPT-4 is currently the state of the art among LLMs in natural language processing tasks. We believe that the results of GPT-4 on our planning tasks could act as an upper bound on the performance of LLMs in planning. Further, the other GPT models provide us with an approximate understanding of the planning capabilities across varying model sizes and fine-tuning methods (instruction-based or chat-based). We have also done preliminary experiments on BLOOM (an open-source large language model) and the results indicate poor plan generation capabilities. We have included the results in the PDF attached as part of the global response. We would also like to point out that we plan to release the required resources and code for the community to evaluate other LLMs of interest.
> 4. More demonstrations in the prompts seem to effectively improve planning performance. The author only conducted zero-shot and one-shot experiments, which is insufficient. An ablation experiment to explore the importance of the "how much-shot" factor would be valuable to the community.
As discussed above, our intent is not to dismiss LLMs’ relevance for planning tasks, but to point out that they can be useful without us having to bend over backwards to ascribe them autonomous planning capabilities they don’t have. We won’t argue that we can’t “customize” LLMs either by giving a large number of examples in-context or during fine-tuning, but that only increases the chance that plan generation becomes approximate retrieval, and doesn’t prove much about the inherent plan generation capabilities of LLMs. On the whole, we believe that we have done a fair evaluation of the autonomous planning capabilities of the LLMs, giving them as much benefit of the doubt as possible.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' reply. I hope the authors can add the experimental results on other LLMs, including open-source models, to the final version. | Summary: This paper provides a systematic evaluation of the planning abilities of a class of LLMs (the GPT series up to the latest GPT-4), using standardized planning problems such as those provided in symbolic planning competitions. It analyzes LLMs as autonomous planners, but also as heuristic planners providing suggestions to sound planners (heuristic mode). The experiments use a sophisticated prompting mechanism allowing both NL prompting and PDDL prompting. In the heuristic mode, LLM plans can be repaired via a local search planner (LPG), or use a verifier to send feedback to the LLM ('backprompting'). While LLMs score poorly as autonomous planners, they can achieve high scores (70-80%) in heuristic mode on standard benchmarks.
Post-rebuttal comments: the authors have fully answered my questions, in particular providing additional results for another LLM (BLOOM, which appears to be a very reasonable choice, due to how its ecosystem differs from that of the GPT family). Their commitment to release some of the evaluation resources can only enhance the contribution of this work, which is reflected in my increasing the 'contribution' score in this review. Having also considered other reviewers' comments and the author responses, I remain very positive about this paper and maintain my original score.
Strengths: The paper introduces a comprehensive evaluation method for the Planning abilities of LLM, harnessing the full methodology of traditional Planning in terms of benchmarks and reasoners. The choice of the latter is particularly appropriate to the experiments at hand, since it includes both a local planner, well-suited to Plan repair, and a validator that can send feedback by identifying gaps or flaws in the proposed plan under the 'heuristic mode'. It is fairly impressive to have automated a process previously taking place in an interactive form with a human in the loop, which was vulnerable to the Clever Hans effect, also observed in other forms of LLM reasoning [1].
The prompting mechanism is particularly sophisticated without being over-engineered, as there is a clear rationale for supporting each option and a rather elegant design starting with PDDL domains and branching out to generate NL or PDDL prompts.
In line with some claims that LLM reasoning tends to reproduce human reasoning to some extent, the choice of planning domains known to be solvable by humans is of high interest, although some of the user/human experiments have been moved back to supplementary material.
The paper is highly readable and quite systematic, and has all the elements to become a reference paper on the topic, not least for the results produced and the contrast between autonomous and heuristic modes, the latter avoiding pitfalls of interactivity or CoT limitations.
[1] Shapira, N., Levy, M., Alavi, S.H., Zhou, X., Choi, Y., Goldberg, Y., Sap, M. and Shwartz, V., 2023. Clever hans or neural theory of mind? stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763.
Weaknesses: The paper is technically sound with very few weaknesses. Perhaps one issue is the limited number of Planning test domains used in the experiments, especially compared to [Silver et al., 2022] (ref [24] in the paper). This might be in relation to the need to explore human solutions to Planning problems as described in the supplementary material, still it could be worth justifying explicitly.
Another potential issue would be concentrating on the GPT family, as LLM may vary in their real-world knowledge depending on their training base.
The paper has similarities with the following preprint: https://arxiv.org/abs/2302.06706
This is not a major issue, either in terms of novelty or in terms of anonymity, since the submission has substantial new material and there is no direct link to the authors of the preprint. Regardless of whether the preprint is from the same authors (or a subset), it would still be appropriate to reference it in the final version of the paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Can any of the observations be related to the autoregressive nature of the LLM explored?
What variability in performance would you expect across LLMs (other than GPT versions)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The conclusion section leaves little space to discuss limitations of the approach. Since CoT prompting has also been explored, it could have been interesting to discuss the proposed coupling of LLM to reasoners via PDDL exchange proposed as part of "Faithful CoT" [1].
The paper rightly identifies the potential role of the natural language semantics of predicates or operators' names in LLM's planning abilities, for which it designs various methods of obfuscation. However, further discussions would be interesting for this phenomenon reported in [2] ("semantics of the English terms used in the PDDL problems"), such semantics of PDDL contents having also been proposed as a mechanism for planning model extension and planning repair [3].
[1] Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., Apidianaki, M. and Callison-Burch, C., 2023. Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379.
[2] Silver, T., Hariprasad, V., Shuttleworth, R.S., Kumar, N., Lozano-Pérez, T. and Kaelbling, L.P., 2022, November. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. - ref [24] of the paper
[3] Porteous, J., Ferreira, J.F., Lindsay, A. and Cavazza, M., 2021. Automated narrative planning model extension. Autonomous Agents and Multi-Agent Systems, 35(2), p.19.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank reviewer GZ9T for their detailed feedback. We are glad that the reviewer found our work to be systematic, comprehensive and likely to be a standard. We will incorporate all the reviewer's suggestions such as referencing other relevant papers and additional justifications. Below, we provide our response to the question raised by the reviewer.
> Can any of the observations be related to the autoregressive nature of the LLM explored? What variability in performance would you expect across LLMs (other than GPT versions)?
Beyond the general "approximate retrieval" capabilities of the LLMs, which can be attributed at some level to their auto-regressive n-grams-on-steroids nature, we did not see any other planning-specific insights. The n-gram auto-regressive nature does seem to help in the context of prompts--especially for the back-prompting techniques--in as much as it seems to get LLMs to generate the correct plan with the back-prompt augmented context. There is, however, no reason to believe that this is anything more than the usual context-sensitive completion capabilities.
Regarding the variability of capabilities across LLMs, we believe that the results of GPT-4 could act as an upper bound on the performance of LLMs in planning as they currently are state-of-the-art in a lot of natural language processing tasks. We have also done preliminary experiments on BLOOM (an open-source large language model) and the results indicate bad plan generation capabilities. We have included the results in the PDF attached as part of the global response. We would also like to point out that we plan to release the required resources and code for the community to evaluate other LLMs of interest.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response, which answered my questions.
It's great to have provided additional data, in particular on BLOOM, which differs sufficiently from GPT to broaden the argument.
At some point, it might be interesting to investigate BLOOM's worst performance, but that is beyond the scope of this paper. | Summary: This work evaluates the planning abilities of LLMs in two distinct settings: (1) As generators of final plans, with or without feedback from a validator, and (2) as generators of seed plans which are then corrected by a standard planner. The evaluations are performed on two commonsense domains for which humans tend to produce high-quality plans: Blocksworld and Logistics. Four LLMs are tested, including GPT-4, which generates more correct plans than the other models. Still, GPT-4 is found to fail on most of the problems, even with the benefit of one-shot prompting and CoT reasoning. But feedback from a validator (VAL) dramatically boosts the solution rate to 82% in BW and 70% in logistics, after just 3-4 feedback loops on average. In the other setting, the standard planner (LPG) produces correct plans in significantly fewer steps when starting with seed plans generated by GPT-4.
Strengths: This is excellent work, carefully detailed, and clearly presented. It avoids the Clever Hans effect that often arises when humans are involved in evaluations.
The different evaluation settings are very well chosen, and the results provide valuable guidance as industries work to understand how these LLMs can best be leveraged.
GPT-4 is thoroughly evaluated on all benchmarks. Inclusion of three other LLMs on many of the evaluations provides additional insight.
Weaknesses: Covering more domains beyond these two would be a useful contribution. But two are sufficient to support the conclusions drawn, and they light the way for others to run similar evaluations on additional planning domains.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Line 97 says that “our approach of specifying the domain as part of the prompt ensures that the generated plans only use the actions in the domain specification.” How does domain specification provide this guarantee? Can't the LLM still hallucinate nonsense?
Line 191 says: “We set the temperature for all models to be 1, thereby making them deterministic.” Is this a typo? The appendix correctly identifies zero as the temperature that produces deterministic behavior.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer JKYr for their valuable comments. We are glad that the reviewer found our work to be detailed and well-presented. Below, we provide responses to the questions raised by the reviewer.
> Line 97 says that “our approach of specifying the domain as part of the prompt ensures that the generated plans only use the actions in the domain specification.” How does domain specification provide this guarantee? Can't the LLM still hallucinate nonsense?
We agree with the reviewer that even after specifying the actions in the domain, there is a possibility that LLMs could hallucinate the actions in the generated plan. However, in our experiments, we found that none of the LLMs hallucinates actions for any of the instances. We will update the paper to make this clearer.
> Line 191 says: “We set the temperature for all models to be 1, thereby making them deterministic.” Is this a typo? The appendix correctly identifies zero as the temperature that produces deterministic behavior.
Yes. It is a typo. We will update the paper and fix it.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the clarifications! | Summary: The paper investigates the (lack of) capabilities of pretrained LLM for solving classical well-known planning benchmarks. They study both the case of fully autonomous LLM without any external feedback and the case of using external tools, for validation feedback or as a seed for improving an external planner. As no fine-tuning is done, the work involves variations of prompts for the task considered. Fig 2 is an excellent summary of the studied tasks including zero and one-shot tasks with the problem in NL or in the original PDDL. GPT-4 is reported as the most powerful LLM, but it is still not satisfactory. Moreover, it's shown to be sensitive to the name description, performing better when using the standard names for well-known benchmarks.
Strengths:
- Some LLMs such as GPT-4 are being used to obtain plans, so it's important to investigate their capabilities.
- Classical planning benchmarks are well-understood so offer a solid ground for evaluation.
- Consider explicitly the case of autonomous mode vs external but automatic feedback.
- Chain of thought is investigated, answering a question that people familiar with LLMs might have.
Weaknesses: - LLMs are trained in language and human-written code. The evaluation with Randomized Disguising might be less meaningful.
- However, Randomized Disguising is a small part of the work.
- An alternative not explored in the paper is to add human-readable description to the domains. Even though that requires human intervention, it's reasonable to assume the ones providing the domain can also provide a description.
- The domain-specific translator is not discussed.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Did you try COT with PDDL prompts?
- Are there any indications of how LLM would perform in problems with shallow plans? If the number of objects or actions is big enough, that might be challenging for classical planners.
- For the interactive scenario, does it make sense to keep the temperature at 0? Perhaps randomization might help the LLM recover by deviating from earlier commitments.
- Did you attempt relaxations with other planning problems? Perhaps it's not "natural" in blocks world, but there are other problems where the plans are equivalent to their relaxed versions.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: A potential limitation of this work is that it might not inform us about the real-world problems that require planning from LLMs. However, that is not the scope of the work, so that argument should be left aside. Instead, this work is a systematic investigation of well-known planning benchmarks. Those problems might be close to the training distribution of the LLMs.
I miss a discussion on the complexity of the planning tasks per se. There are some simple algorithms for solving Blocksworld problems. While logistics can be a complex problem, in the PDDL benchmark solving a single problem is bounded by the complexity of moving one package, which is a simple task, since cities are fully connected and it takes one airplane trip to move a package to the right city. The relaxed plans discussed in the paper might be related to my question about shallow plans.
Other comments:
- Fig 2 summarizes well the approaches studied, but that's sometimes not mentioned in the table captions. For instance, Table 1 and 2 should mention that they consider the automated approach.
- What's I-GPT-3 in Table 3?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank reviewer NTC1 for their valuable comments. We are glad that the reviewer found our work to be important and comprehensive. Below, we provide responses to the concerns raised by the reviewer.
> 1. Are there any indications of how LLM would perform in problems with shallow plans? If the number of objects or actions is big enough, that might be challenging for classical planners.
We considered this possibility too but found that LLMs don’t necessarily perform well in problems with shallow plans. We would like to point to the graphs in the PDF attached above as part of the global response. These graphs represent the distribution of the correct plans by GPT-4 over optimal plan lengths. From these graphs, we can say that our traditional notions of planning complexity do not hold with LLMs. For an LLM, an instance that is easy from the perspective of planning complexity is no different from a hard one, as it just predicts the next tokens based on their weights and the context. We will update the Appendix with these discussions.
> 2. Did you attempt relaxations with other planning problems? Perhaps it's not "natural" in blocks world, but there are other problems where the plans are equivalent to their relaxed versions.
We have included the relaxation evaluation for other planning domains as well (Logistics and Mystery Blocksworld) in Appendix A.2.1.
> 3. Did you try COT with PDDL prompts?
We haven’t looked into chain of thought experiments with PDDL prompts, but we don’t expect the results to be any better.
> 4. For the interactive scenario, does it make sense to keep the temperature at 0? Perhaps randomization might help the LLM to recover to deviate from earlier commitments.
We kept the temperature at 0 primarily for the reproducibility of the results. We believe it would be an interesting additional investigation to vary the temperature and check for performance improvements in the back-prompting method.
> 5. What's I-GPT-3 in Table 3?
I-GPT-3 refers to the InstructGPT-3 model.
> 6. The domain-specific translator is not discussed.
For each domain, we perform template-based translation to translate from PDDL to natural language for the natural language prompt configurations. We will include this information in the paper as well.
> 7. LLMs are trained in language and human-written code. The evaluation with Randomized Disguising might be less meaningful.
- However, Randomized Disguising is a small part of the work.
- An alternative not explored in the paper is to add human-readable description to the domains. Even though that requires human intervention, it's reasonable to assume the ones providing the domain can also provide a description.
Our point in that domain obfuscation section was to show that LLMs don’t seem to possess plan generation abilities that can’t be explained by their approximate retrieval abilities. To the extent that LLMs significantly worsen in plan generation when predicate names are changed either to other meaning-bearing words or to random words, we believe it lends credence to our hypothesis.
---
Rebuttal Comment 1.1:
Title: thank you
Comment: Thank you for your responses.
I agree that Randomized Disguising is not the same. That'd be a different experiment. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback. We have provided our responses separately for each reviewer. We have attached a PDF containing the images and tables which we refer to in the individual responses.
Pdf: /pdf/7840395a675841768dfab06a742c8009695aa510.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Explain Any Concept: Segment Anything Meets Concept-Based Explanation | Accept (poster) | Summary: This paper proposes EAC, which combines SAM with XAI. The technique consists of three phases: (1) generate concepts with SAM; (2) train a surrogate model to represent the target model, using the same FC layer; (3) regard the results of SAM from the first phase as players and calculate Shapley values with the surrogate model. Finally, the masked image obtained from the Shapley values serves as the visual explanation.
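Phase (3) of the summarized pipeline treats each SAM-extracted concept as a player in a cooperative game. The following is a hedged illustration only, not the paper's implementation: a minimal Monte Carlo sketch of per-concept Shapley value estimation, where a toy additive value function stands in for the surrogate model's class score and all names are illustrative assumptions.

```python
# Hedged sketch (not the paper's code): Monte Carlo estimation of Shapley
# values over "concepts". The value function is a toy additive stand-in
# for the surrogate model's class score.
import random

def shapley_values(value_fn, n_players, n_samples=2000, seed=0):
    """Approximate each player's Shapley value by sampling permutations."""
    rng = random.Random(seed)
    phi = [0.0] * n_players
    players = list(range(n_players))
    for _ in range(n_samples):
        rng.shuffle(players)
        coalition = set()
        prev = value_fn(coalition)
        for p in players:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev  # marginal contribution of player p
            prev = cur
    return [v / n_samples for v in phi]

# Toy "surrogate score": three concepts contribute 0.6, 0.3, and 0.1
# additively, so the Shapley values should recover exactly those weights.
weights = [0.6, 0.3, 0.1]
score = lambda coalition: sum(weights[i] for i in coalition)
phi = shapley_values(score, 3, n_samples=500)
```

Because the toy game is additive, every sampled marginal contribution of concept `i` equals `weights[i]`, so the estimate is exact here; for a real surrogate model the estimate only converges as `n_samples` grows.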
Strengths: 1. This is the first work to use SAM as a concept discovery method in concept-based XAI.
2. Good performance in the quantitative evaluation of faithfulness and user study.
3. The paper is well-organized and easy to follow.
Weaknesses: 1. It would be better to give some examples to show the trade-off; it’s hard to understand the trade-off mentioned in Sec. 3.1. A highly faithful visual explanation map can accurately show the importance of image pixels for the model’s prediction, which is why we use deletion/insertion to measure faithfulness. This does not conflict with human understanding. For example, through the explanation map from Grad-CAM or RISE, humans can see where the model was looking when it made the prediction, and can then analyze the model’s error modes.
And in [A, B], the user studies show that the faithfulness of the visual XAI map is consistent with human confidence (a higher-faithfulness map gets a higher rank in the user study), demonstrating that faithfulness and understandability do not conflict.
[A] Chenyang Z, Chan A B. ODAM: Gradient-based Instance-Specific Visual Explanations for Object Detection[C]//The Eleventh International Conference on Learning Representations. 2023.
[B] Petsiuk V, Jain R, Manjunatha V, et al. Black-box explanation of object detectors via saliency maps[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 11443-11452.
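For context on the deletion/insertion faithfulness measurement mentioned in this weakness point, here is an illustrative toy sketch (my own construction, not from the paper or [A, B]; the 4-"pixel" image and linear "model" are assumptions). In the deletion metric, pixels are removed in decreasing saliency order while the model's class score is tracked; a faithful saliency map makes the score drop quickly, yielding a lower area under the deletion curve. The insertion metric is the mirror image, adding pixels back in.

```python
# Toy sketch of the "deletion" faithfulness metric: remove pixels from most
# to least salient and average the model's score along the way. Lower is
# better (the score collapses as soon as truly important pixels are gone).

def deletion_auc(score_fn, image, saliency):
    """Mean model score as pixels are deleted in decreasing saliency order."""
    order = sorted(range(len(image)), key=lambda i: -saliency[i])
    img = list(image)
    scores = [score_fn(img)]
    for i in order:
        img[i] = 0.0  # "delete" this pixel
        scores.append(score_fn(img))
    return sum(scores) / len(scores)

# Toy linear "model": the first pixel matters most (weight 0.7), and so on.
w = [0.7, 0.2, 0.07, 0.03]
score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
image = [1.0, 1.0, 1.0, 1.0]

faithful = deletion_auc(score, image, saliency=[4, 3, 2, 1])    # matches w
unfaithful = deletion_auc(score, image, saliency=[1, 2, 3, 4])  # reversed
```

A saliency ranking that matches the model's true pixel weights produces a lower deletion AUC than the reversed ranking, which is the sense in which the metric rewards faithful maps.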
2. My major concern is the significance of the work, although the paper gives good quantitative evaluation results and discussions about the potential usage of EAC.
- I’m not sure what the explanations are supposed to explain. From the samples in Figure 2, the explanation seems to be a correct segmentation of the salient object in the image, which corresponds to the classification result. The method is also tested on COCO, and I’m curious about the explanation in the situation of several same-class objects. In the fourth row, there are two zebras, and the explanation only marks out one of them; what does this mean? That the model classifies the image as “zebra” because it has seen the right one, but not because of the other one?
- The explanation map is supposed to help developers understand the predictions and analyze the model. The paper should also provide some cases of applying EAC to misclassified samples.
- I know that the baseline LIME also uses a surrogate model, but is it reasonable to use a surrogate model to replace the target model to be explained? How can we ensure that the same input (any coalition S of concepts) always generates the same output between the two models? Why can the explanation for the surrogate model represent the explanation for the target model?
3. Line 87, the reference of GradSHAP is actually for Grad-CAM. Reference for Grad-SHAP is missing.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see the "Weaknesses"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: explain the trade-off in sec 3.1**
A1: Sorry for the confusion. Our responses follow:
First, the “trade-off” primarily refers to previous superpixel-based XAI methods (the mainstream), because it is generally hard to decide on a “one-size-fits-all” superpixel size for previous works.
We present the following figure, showing that the superpixel size largely determines faithfulness and understandability. The ResNet model predicts the original image as “ballplayer,” and we use EAC with two baselines from the main paper (superpixel-based IntGrad and cluster-pixel-based GradShap) to explain this prediction.
[A figure has been submitted via the "Official Comment" button to the AC]
As shown in this figure, GradShap correctly identifies the player, but the cluster area is too small to be clear. Superpixel-based IntGrad is more readable than GradShap, but it is imprecise. EAC strikes a balance by producing a well-shaped player that is easy to understand while focusing on the ballplayer rather than irrelevant objects.
Overall, an overly large superpixel size can yield inaccurate yet more “complete” (and thus more readable) outputs, and vice versa for an overly small superpixel size. Accordingly, Figure 1 in our submitted Supplementary Material quantitatively reports that a too large/small superpixel size undermines faithfulness. On the other hand, our EAC by design avoids this trade-off, given that the size of each concept is intelligently extracted by SAM’s segmentation.
Again, thank you for your advice, and we will extend the “trade-off” paragraph to avoid confusion.
> "And in [A, B], the user studies show that the faithfulness of the visual XAI map is consistent with human's confidence (higher faithfulness map gets a higher rank in the user study), demonstrating that faithfulness and understandability are not conflict."
We wish to clarify a confusion here. Those two works empirically justify that their methods have better faithfulness and understandability than previous works. In this regard, they are aligned with our paper, because our evaluation shows that EAC achieves better faithfulness and understandability. Those two papers, however, do not study the inherent trade-off across superpixel-based methods.
**Q2: Explain the “correctness” in Fig 2 of what EAC should deliver to the audiences**
A2: We answer this from three aspects:
Shapley Value by design fairly measures the average expected marginal contribution of a concept over all possible coalitions. That is, Shapley computes exactly how much the model prefers the zebra on the left versus the one on the right. Thus, it is technically correct to flag the right zebra, as it contributes more.
In fact, as we can see in Fig. 2, all baselines using Shapley Value (GradShap, KernelShap, and EAC itself) consistently prefer the zebra on the right.
Compared with other Shapley baselines using superpixels, the output of our EAC method renders a clear and well-formed zebra to the audience, which further enhances human readability.
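For reference, the standard definition of the Shapley value of a concept $i$ in a concept set $N$ with coalition value function $v$ (this is the textbook formula, not notation taken from the paper) is:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
\bigl( v(S \cup \{i\}) - v(S) \bigr),
```

i.e., a weighted average of the marginal contribution $v(S \cup \{i\}) - v(S)$ of concept $i$ over every possible coalition $S$ of the remaining concepts, which is what the rebuttal's "average expected marginal contribution" refers to.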
**Q3: What is the rationale for using a surrogate model to replace the target model? How can we ensure that the same input (any coalition S of concepts) always generates the same output between the two models? Why can the explanation for the surrogate model represent the explanation for the target model?**
A3: Unlike LIME, our aim is to calculate the Shapley Value of each concept’s contribution. Computing the exact Shapley Value is computationally expensive (O(2^N)), so we follow common practice and employ a surrogate model that can estimate it [1,2,3]. To ensure that our surrogate model produces the same output as the original model, we use Monte Carlo (MC) sampling; [2] has theoretically shown that the surrogate-provided Shapley Value approaches the true value of the target model as more samples are drawn. Accordingly, we use a large number of MC samples (50,000 iterations) for each experiment. Our empirical observation shows that this setting offers sufficiently good accuracy. We’ll note this in the revision.
[1] Graphsvx: Shapley value explanations for graph neural networks. ECML PKDD 2021.
[2] Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms. VLDB 2019.
[3] Shapley Computations Using Surrogate Model-Based Trees. IISA 2019.
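To make the estimation idea concrete, here is a small, self-contained sketch (illustrative only — this is the generic permutation-based Monte Carlo Shapley estimator, not the paper's PIE/surrogate implementation; the toy value function is invented for the example):

```python
import random

def shapley_mc(players, value, n_samples=2000, seed=0):
    # Monte Carlo Shapley estimate: average each player's marginal
    # contribution over random permutations of the player set.
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition = []
        prev = value(frozenset(coalition))
        for p in perm:
            coalition.append(p)
            cur = value(frozenset(coalition))
            phi[p] += cur - prev
            prev = cur
    return {p: total / n_samples for p, total in phi.items()}

# Toy "model": a coalition of concepts has value 1 iff it contains concept 'a'.
v = lambda S: 1.0 if 'a' in S else 0.0
print(shapley_mc(['a', 'b', 'c'], v))  # {'a': 1.0, 'b': 0.0, 'c': 0.0}
```

In EAC the `value` call would be a forward pass of the (surrogate) model on a masked input, which is exactly why a cheap surrogate pays off when tens of thousands of samples are drawn.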
**Q4: Line 87, the reference of GradSHAP is actually for Grad-CAM. Reference for Grad-SHAP is missing.**
A4: Sorry about this and thank you for pointing it out; we will carefully proofread the paper and fix those errors.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed reply, which mostly resolves my concerns. I would like to raise my score to 5.
P.S.: I cannot see the figure you submitted to the AC; perhaps you can submit an extra PDF for the figures.
---
Reply to Comment 1.1.1:
Title: Thank you and the figure
Comment: Dear Reviewer,
Thanks a lot for reading our rebuttal and raising the score! We appreciate it very much.
Here is the figure:
https://anonymous.4open.science/r/sam-demo-AB3D/rebuttal_demo_for_%2058qS.png
This url works on our end, and kindly let us know if it does not work on your end.
Sincerely, | Summary: This work proposes EAC to study the interpretability of models. Instead of making element-wise explanations, EAC segments an input into sub-parts, then uses Shapley value to characterize important features for a model decision. User studies were conducted to show the explainability ability of this method.
Strengths: In this work, the idea that splits an input into performing sub-parts and subsequence-level explanation are reasonable. The proposed method is also easy to understand. The user studies show the effectiveness of EAC.
Weaknesses: This method is built upon LIME with certain modifications specific to approximating the target model, which compromises the technical novelty of this work. The selection of parameters is also not clear to me, which should influence the final performance; for example, how does one choose an optimal surrogate model suitable for datasets across different domains? Besides, the comparison methods in Tables 1 & 2 are vanilla, so they are not quite convincing; stronger methods should be considered. In addition, the baseline methods, i.e., LIME and KernelSHAP, used in Tables 1 & 2 were proposed in 2016 and 2017 respectively. More recent methods should be considered as baselines, for example, “On locality of local explanation models, NeurIPS 2021”, “Craft: Concept recursive activation factorization for explainability, CVPR 2023”, “RKHS-SHAP: Shapley values for kernel methods, NeurIPS 2022”. Please address the weaknesses in the rebuttal.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: This paper is well presented with an interesting idea based on the Shapley value. Since it is mainly based on the Shapley value technique, this may limit the originality of the contribution.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed broader societal impacts.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: EAC built upon LIME, compromise technical novelty**
A1: Thanks for the comment. Indeed, we found the binary feature expression of LIME very inspiring, and we adopted it in our pipeline. Nevertheless, there are two major differences between our work EAC and LIME: 1) concept-wise SAM+Shapley and 2) Per-Input Equivalence (PIE).
1) Unlike LIME, we choose Shapley Value as our importance-score measurement, because Shapley Value fairly accounts for the average expected marginal contribution of a concept over all possible coalitions. In contrast, LIME only uses superpixels for concept discovery, which is often less human-friendly and understandable than the SAM extractor, as shown in Fig. 2.
2) To reduce the heavy computational burden of Shapley Value, we propose the Per-Input Equivalence (PIE) scheme. Moreover, while LIME only uses a set of linear weights as the surrogate model, ours not only mimics the feature extractor but also retains its original fully-connected layer.
Also, besides the above *technical novelty* comparison, we wish to clarify that our paper has *conceptual-level novelty* (i.e., we are the first to advocate using SAM as a concept discovery method to facilitate concept-based XAI, achieving high faithfulness and understandability) and also *highly encouraging empirical results*. We’ll better clarify our contributions in the revision.
**Q2: unclear selection of hyperparam for EAC?**
A2: Thanks for the question. We clarify that EAC does not require many hyperparameters. The only ones involved are in fitting the PIE scheme, i.e., a simple linear neural-network learning scheme, and the Monte Carlo (MC) sampling: we set `lr=0.008` and the number of MC samples to `50000` throughout all experiments. We’ll clarify this in the revision; we also explain why we use an MC sampling threshold of `50000` in our answer to **Q2** of reviewer 6va6 above.
**Q3: more and stronger baselines**
A3:
| SOTA | Imagenet adding (higher the better) | Imagenet removing (lower the better) | COCO adding(higher the better) | Coco removing(lower the better) |
|-------|-------------------------------------|--------------------------------------|--------------------------------|---------------------------------|
| EAC | 83.40 | 23.799 | 83.404 | 16.640 |
| CRAFT | 60.40 | 54.66 | 51.49 | 44.93 |
Following your suggestion, we launched more experiments during the rebuttal phase to compare with these baselines. In particular, we report the performance of CRAFT (CVPR’23) compared with EAC under the same experimental setting as Table 1. For CRAFT, we report its best performance after carefully tuning its hyperparameters: patch_size=32, num_superpixels=75/58 for ImageNet/COCO. EAC outperforms CRAFT by around 20 AUC percentage points in almost all cases.
On the other hand, RKHS-SHAP and Local Explanation Models are originally designed for two-dimensional numerical data (rows and columns). Adapting them to three- or four-dimensional image tensors would take too much time and effort within the rebuttal window. We attempted to use the Local Explanation Models pipeline on only the last fully-connected layer of the network, which produces a two-dimensional feature matrix of size `n x 2048`, where `n` is the number of superpixel concepts. However, this did not work because the pipeline needs `n` to be larger than 2048, which is not feasible.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for this extensive and informative rebuttal. I would like to raise my score to 5. | Summary: This article proposes EAC, which aims to use the Segment Anything Model to generate some prior concepts. By constructing a surrogate model, the concept combination area most relevant to the decision category is calculated by Shapley Value. The author evaluates the proposed model from three perspectives: faithfulness, understandability and effectiveness. Experimental results demonstrate the superiority of the proposed method.
Strengths: - SAM-based prior methods will provide more accurate localization on conventional data than superpixel-based methods.
- This approach can explain both CNN and ViT models (in supplementary material).
- The authors provided the code to ensure reproducibility.
Weaknesses: - The authors compare DeepLIFT, GradSHAP, and IntGrad methods, which use superpixel-based methods. Can the authors replace the superpixel method with the input result of Segment Anything to make the method comparison fairer?
- Why is it necessary to train a surrogate model? Is this to improve inference efficiency?
- Since this paper only uses SAM to generate prior knowledge, and then uses Shapley Value to estimate the importance score of prior concepts. I would like to see a concrete comparison with other Shapley Value based methods such as GradShap+SAM or FastShap+SAM.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Are the deletion and insertion indicators used by the author deleted according to the score of shapley value?
- Since the results generated by SAM have no labels, can this segmentation result be called a concept?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed potential social implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: compare our results with GradShap+SAM or FastShap+SAM.**
A1:
Thank you for this insightful question. We indeed explored this direction before, and our preliminary observations show that it is unpromising; please see our response to **Q2** below for a conceptual-level clarification of “training a surrogate model.”
Following your suggestion, below we launched more experiments to compare with these baselines. Due to the large number of baselines and the short rebuttal window, we focus on evaluating the "Adding" operation: adding and removing are of conceptually similar difficulty, as reflected in our results in the paper (tools behave consistently under "Adding" and "Removing"; as shown in Table 1, tools with a better adding score preserve approximately the same performance ranking in deleting, e.g., EAC, LIME, DeepLIFT).
| SOTA | Imagenet adding | COCO adding |
|--------------|-----------------|-------------|
| KernelSHAP+sam | 81.76 | 75.65 |
| DeepLIFT+sam | 52.82 | 49.27 |
| LIME+sam | 79.85 | 77.50 |
| EAC (Ours) | 83.40 | 83.404 |
| FeatAbl+sam | 72.74 | 71.24 |
| GradSHAP+sam | 44.47 | 41.37 |
From the table above, our EAC method consistently achieves the best result and outperforms the others. On the COCO dataset, it is even six percentage points higher than the second best (LIME+SAM). This empirically justifies the advantage of employing our customized lightweight surrogate model.
We’ll add the above results (with further evaluations on “Removing”) in revision.
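For readers unfamiliar with the "Adding"/"Removing" (insertion/deletion) metrics reported above, here is a minimal sketch of how such a score is typically computed (an illustrative helper with made-up numbers, not the paper's evaluation code): concepts are inserted/deleted in decreasing order of their importance score, the model's confidence for the target class is recorded after each step, and the score is the normalized area under that curve.

```python
def curve_auc(confidences):
    # confidences[k]: model probability for the target class after the
    # top-k concepts (ranked by importance) have been inserted/deleted.
    # Returns the trapezoidal area under the curve over a [0, 1] x-axis:
    # higher is better for insertion ("adding"), lower for deletion.
    n = len(confidences)
    xs = [k / (n - 1) for k in range(n)]
    area = 0.0
    for k in range(1, n):
        area += (xs[k] - xs[k - 1]) * (confidences[k] + confidences[k - 1]) / 2.0
    return area

# Toy insertion curve: confidence rises quickly once the most
# important concepts are revealed.
insertion = [0.10, 0.60, 0.80, 0.90, 0.95]
print(round(curve_auc(insertion), 3))  # 0.706
```

A faithful explanation ranks concepts so that this curve rises steeply for insertion (large AUC) and drops steeply for deletion (small AUC), which is what the percentages in the tables above measure.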
**Q2: Why is it necessary to train a surrogate model? Is this to improve inference efficiency?**
A2: First, using a surrogate model is common practice in this field of research [1,2,3]; for instance, the seminal work LIME also uses surrogate models to approximate a neural network’s decision and ease XAI.
Nevertheless, unlike LIME, the surrogate model in our work mainly serves to efficiently compute the Shapley value of each concept’s contribution (thus, to answer your question: yes, it is mainly for “improving inference efficiency”).
Overall, computing the exact Shapley Value is computationally expensive (O(2^N)), so we follow common practice and employ a surrogate model that can estimate it [1,2,3]. To ensure that our surrogate model produces the same output as the original model, we use Monte Carlo (MC) sampling; [2] has theoretically shown that the surrogate-provided Shapley Value approaches the true value of the target model as more samples are drawn. Accordingly, we use a large number of MC samples (50,000 iterations) for each experiment. Our empirical observation shows that this setting offers sufficiently good accuracy. We’ll note this in the revision.
[1] GraphSVX: Shapley Value Explanations for Graph Neural Network. ECML PKDD 2021.
[2] Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms. VLDB 2019.
[3] Shapley Computations Using Surrogate Model-Based Trees. IISA 2019.
---
Rebuttal Comment 1.1:
Title: Another Question
Comment: Thanks. How were the visualized regions in Figure 2 selected? Is it based on a threshold, or is the most important segmented region chosen?
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
We appreciate your response to our rebuttal. All the SOTA methods in Fig. 2 produce an importance score for each patch in an image, using Shapley, gradients, or other approaches. We then choose the concept patch with the highest score for each method. Sorry for the confusion; we’ll clarify this in the revision.
Strengths: The EAC is a novel concept-based interpretable method that integrates the recently released SAM. It leverages the zero-shot/few-shot capabilities of the segment anything model, addressing the limitations of pixel-based methods and the need for human annotation in concept-based approaches.
The proposed lightweight per-input equivalent (PIE) scheme improves the efficiency of the explanation process while maintaining high faithfulness and understandability.
The paper conducts extensive experiments and evaluations on popular datasets such as ImageNet and COCO to demonstrate the effectiveness of EAC. Comparisons with traditional pixel-level and superpixel-based XAI methods showcase the superior performance of EAC.
Weaknesses: All datasets used in the paper are natural images in the general domain. However, it is important to include some knowledge-specific domains, such as medical images, for evaluation purposes.
The performance of the proposed methods can be affected by the SAM segmentation performance. It has already been found that SAM struggles to segment medical images or tiny-scale objects.
Some failure cases should be included to provide a comprehensive overview of the model's faithfulness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: May need proofreading:
All references in Supplementary Material are shown incorrectly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Lack of real-world deployment evaluations: While this paper discusses the potential impact and applications of EAC in various domains, it does not provide real-world deployment evaluations or case studies to demonstrate the practical effectiveness of EAC in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Why not include some knowledge-specific domain datasets for evaluation?**
A1: Thank you for your comments; we agree with you. Our observation and exploration also show that the Meta SAM model was only trained on the general image domain, and it may struggle to segment images in knowledge-specific domains.
From a more general perspective, a common challenge for neural models is their poor generalization in domains that require specific knowledge. Besides SAM, even the latest and most advanced Large Language Models (LLMs) like ChatGPT/LLaMA suffer from this issue as well [1,2,3].
That said, with the recent development of knowledge-specific SAM variants for medical imaging [4], remote sensing [5], and UAVs [6], we believe EAC has the potential to improve the explanation of DNN decisions in other targeted areas. We leave this as future work, as discussed in Section 6. Since this is the first attempt to bring humanly understandable and computationally feasible concept extractors to the field of XAI, we believe our work can shed light on their explainability.
In revision, we would like to follow your suggestions to clarify this point (SAM’s limitation) in the paper; this shall better provide a fair and comprehensive overview of the model’s faithfulness without potential overclaims.
[1]: Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond.
[2]: Is ChatGPT a good translator? A preliminary study.
[3]: The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain.
[4]: SAM on Medical Images: A Comprehensive Study on Three Prompt Modes.
[5]: The Segment Anything Model (SAM) for Remote Sensing Applications: From Zero to One Shot.
[6]: SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your feedback. I'm happy with the clarification, and I will keep the weak acceptance. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Latent SDEs on Homogeneous Spaces | Accept (poster) | Summary: In this paper the authors develop machinery for performing variational inference on latent functions in models where observations are generated by a latent stochastic process. That is, the generative model is that a path in some latent space is generated according to a prior, and then we observe (noisy) values of this path at a subset of the points. In principle this paper deals with a general case where the latent space is a "homogeneous space" which is a manifold such that there is a Lie group such that for any pair of points in the manifold one can translate one of the points to the other using a single element of the Lie group. In practice, the paper focuses on the specific case of the manifold being an n-sphere (so that the Lie group is the rotations, SO(n)), and the generative model is that the initial point is sampled uniformly from the sphere and then evolves according to a (scaled) Brownian motion on the sphere. Once the authors develop their inference machinery, they apply their method to a number of regression, classification, interpolation, and extrapolation problems, showing that this latent SDE formulation can be useful in some settings.
Strengths: The proposed method is very elegant and the mathematics is nice. Moving to SDEs on a compact latent space allows for many nice features like being able to define uninformative priors and having nice correspondences between the initial distribution and the stationary distribution of the SDE.
The simplicity and performance on the considered tasks is strong motivation for the usefulness of the construction, and having access to posteriors over entire latent trajectories allows for a number of interesting tasks like interpolation.
Weaknesses: - The main weakness for me is the presentation of the material. I will try to provide more specific feedback in the following points, but overall there was a large disconnect between the setup in equations (1) and (2) and then the actual applications. Specifically, (2) is listed as the objective function, but then it's not clear what the objective function would be for the regression and classification tasks. The classification task presumably has a different generative model than the one presented between equations (1) and (2), and it's not clear where the class label would go, and what data is used where. What parts of the models are learnable?
- The introduction is a bit confusing. For example, it is unclear while reading the introduction what is path-valued and what is not. Line 105 says everything is path-valued but then lines 93-94 say that we only have a finite number of discrete observations of the path.
- I found the condition on $h$ in line 103 confusing. Is this at all a restriction? It seems like equation (1) automatically makes $X_t | Z_t$ normally distributed with mean $\mu_t = h(Z_t)$ and variance $\mathbf{R}$.
- I defer to the authors on how they want to present it, but I was confused by the presentation of the ELBO around (2) because in general the KL divergence between distributions over functions determined by SDEs will be infinite unless the diffusion terms are the same. The authors address this thoroughly and clearly later in the manuscript, but sweep this (to me) important point under the rug in the introduction.
- Equation (12) and Figure 2 make it clear that the inference network takes in the $\mathbf{x}$'s and then learns the hyperparameters of the initial distribution and the SDE. Is the inference network necessary because of the sample sizes considered here? Wouldn't it be possible to just directly optimize the set of $\mathbf{K}_i^\phi$'s that specify the SDE (and the parameters of the initialization) for each set of observations? Some comment on the distinction of what is being used for amortization (and why) vs. modeling flexibility would be useful.
- The notation around equation (7) is very confusing to me. It appears that the time points $t_k$ represent the time that has passed since time $0$, but then it feels like $Z_{t_j}$ should be $G(t_j)G^{-1}(t_{j-1})Z_{t_{j-1}}$. Perhaps relatedly, it's not obvious to me what the connection is between $G(t_j)$ and the SDE in (4). In particular it feels like to solve the SDE for the interval $[t_{j-1}, t_j]$ you would need a different initial condition (the distribution over $G(t_{j-1})$ obtained by solving the SDE up to that point). Apologies if I'm being slow here.
- The example of extrapolating the rotated MNIST was a bit confusing. Doesn't the extrapolation rely exclusively on the Chebyshev polynomials behaving well after the end of the interval over which they are trained? Relatedly, how is the end time point chosen during the VI optimization and does it matter? The KL term seems sensitive to how long of an interval is considered. E.g., if we consider data sampled on [0, 1], but then compute the KL on paths from [0, T] with T>>1, we would want the drift terms to eventually relax back to matching the prior.
- Lines 371-372 are confusing: what is meant by saying that only the initial time point is observed? The prior is driftless, so it feels like if one just observes the initial state it should be hard to get directional rotations using the latent SDE model.
Typos:
- "on a various time series interpolation" --> "on various time series interpolation"
- "the paradigm of Parameterizing the vector fields" --> "the paradigm of parameterizing the vector fields"
- Line 171: "in context of" --> "in the context of"
- Line 186: "are show in" --> "are shown in"
- Line 189: "This allows to select" --> "This allows selecting"
- I believe the equation prior to line 192 is missing a square on the Frobenius norm term since the vector norm on the lefthand side is squared
- Line 265: "with label switches need" --> "with label switches needing"
- Line 358: "Upon receiving first" --> "Upon receiving the first"
- I believe equation (35) has an erroneous $\mathbf{z}$ on the lefthand side (the righthand side is matrix valued)
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: The actual generative model is quite simple, and very little of it is learned. In particular, the prior distribution has only a single learnable parameter. Presumably most of the modeling flexibility comes from learning the mapping from the latent space to the observable data (i.e., $p(\mathbf{x}(t) | \mathbf{z}(t))$). Is there a reason for not learning a drift term for the prior? Would one run into identifiability issues with more complex priors?
How difficult would it be to extend the implementation and framework to other homogeneous spaces? Are there any applications where such a generalization would be obviously useful?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The authors have adequately addressed potential social implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Below, we address all points and outline how we will revise our manuscript. If there are further questions, we are more than happy to answer.
> ... (2) is listed as the objective function, but then it's not clear what the objective function would be for the regression and classification tasks ... What parts of the models are learnable?
We kindly refer to our **General Response** (♦) to clarify the optimization objective & learnable model components.
> ... it is unclear while reading the introduction what is path-valued and what is not. Line 105 says everything is path-valued but then lines 93-94 say that we only have a finite number of discrete observations of the path.
In the mentioned paragraph, we wanted to express that our setting differs from a typical VAE setting in that the random variables are stochastic processes, i.e., path-valued. To clarify, in `l105`, "observations" refer to realizations $\mathbf x = X(\omega):[0,T]\to \mathbb R^d$ of the process $X$. In practice, not the entire $\mathbf x$ is available, but only a finite tuple $(\mathbf x(t_1), \dots, \mathbf x(t_m))$ of evaluations of $\mathbf x$. We will clarify this in a final version.
> I found the condition on $h$ in line 103 confusing. Is this at all a restriction? It seems like equation (1) automatically makes $X_t|Z_t$
normally distributed with mean $\mu_t = h(Z_t)$ and variance $\boldsymbol{R}$.
Overall, Eq. (1) in our "Preliminaries" section is unnecessary and (apparently) caused confusion; please see our **General Response** (♥).
> I was confused by the presentation of the ELBO around (2) because ...
We agree that even at this (early) point in the manuscript, a remark would help to clarify that when distributions are determined by SDEs, the diffusion terms need to agree to avoid infinite KL divergence. We will add a remark.
> Equation (12) and Figure 2 make it clear that the inference network takes in the $\mathbf x$'s and then learns the hyperparameters of the initial distribution and the SDE. Is the inference network necessary because of the sample sizes considered here? ...
The inference network learns a mapping from observations to the initial state parameterization of the SDE and to the coefficients in $\mathbf K^\phi_i$. In principle, these parameters could be optimized directly and separately for each time series, but this would not scale well: in fact, at test time, each new sequence of observations would require optimizing over the SDE parameterization again. With an *inference network*, one simply needs one forward pass through the model.
> The notation around equation (7) is very confusing to me ...
The reviewer is right that the notation of $G(t_j)$ around Eq. (7) is inconsistent with the SDE on $G_t$ in Eq. (4) and that $G(t_j)$ needs to be replaced with $G_{t_j} G_{t_{j-1}}^{-1}$, i.e., with $\exp(\Omega_{j})$ from Alg. 1 in the suppl. material. In this respect, for $t \in [t_{j-1}, t_j]$, the SDE in Eq. (4) is to be understood as an SDE on $G_t$ with initial value $G_{t_{j-1}}$, or, equivalently, as an SDE on $G_t G_{t_{j-1}}^{-1}$ starting at the identity. We will clarify and update the notation.
> The example of extrapolating the rotated MNIST was a bit confusing. Doesn't the extrapolation rely exclusively on the Chebyshev polynomials ...
For extrapolation, we use the *same* model that was trained on the interpolation task, which has only seen data in the time range $[0,1]$ (i.e., the 1st full rotation, w/o knowledge of extrapolation times $t>1$). Accordingly, the path KL div. is only computed on $[0,1]$. At test time, predictions at future time points are generated by integrating over a longer time range. It is true that extrapolation quality primarily depends on how well the drift term behaves for $t>1$. To verify the intuition that a constant velocity on the sphere is a suitable model for the constant rotation velocity in the data, we use only the first $K=1$ Chebyshev polynomial, i.e., *a constant*. $K>1$ yields better interpolation results, but at a loss of extrapolation quality, as the higher-order polynomials are less well-behaved for large $t$.
> Lines 371-372 are confusing: what is meant by saying that only the initial time point is observed? ...
We agree that the phrase "only the initial time point is observed" is misleading: for *loss computation*, we **do** have information about 11 out of 16 images rotated by multiples of 22.5°. However, for training & testing, only the initial upright '3' is available as model **input**. This strategy works as the rotation speed in the data is constant. For rotation speeds that vary across time series, we would likely need more input images.
> ... Is there a reason for not learning a drift term for the prior? Would one run into identifiability issues with more complex priors? ...
We use an uninformative prior with zero drift to express our limited knowledge about the underlying latent dynamics. In case of additional information, learning a drift component for the prior might be beneficial, e.g., learning a prior with constant velocity might be a good idea on Rotated MNIST.
Regarding identifiability, note that we are not interested in interpreting the parameters of the latent SDE (in isolation) but only in its utility for downstream tasks.
Identifiability would be more relevant when a specific parametric SDE is prescribed (e.g., from a physical model) and one is interested in the values of the fitted parameters.
> How difficult would it be to extend the implementation and framework to other homogeneous spaces? Are there any applications where such a generalization would be obviously useful?
For more modeling flexibility, SDEs on $\mathbb R^n$, induced by matrix multiplication with $G\in GL(n)$, can be used. We already implemented this variant, and **R-Tbl. 1** in the attached PDF lists results on Rotated MNIST. Another application (cf. reviewer XRoq) is latent dynamics in a hyperbolic space induced by $O(1,n)$, e.g., for modeling relativistic dynamics.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the thorough response. You have substantially clarified things, and I will increase my score correspondingly, under the assumption that these clarifications will make it into the next version of the paper.
I also think it would be good to include something akin to your response here regarding the extrapolation on MNIST. It feels like this particular example is leveraging information that will not be true in general applications, and so presents a somewhat overly rosy view of how this approach is expected to extrapolate on arbitrary real world applications.
Thank you again. | Summary: This paper deals with the problem of variational inference for sequential data, i.e., time series, using latent SDEs. The idea of this class of methods is to assume that the observed stochastic process is related to a generative SDE in a latent space, whose parameters need to be inferred from data in a Bayesian fashion. The novelty of this paper is to consider SDE priors that live on homogeneous spaces, and in particular on the sphere (acted upon transitively by matrix multiplication by $SO(n)$). Many recent works have considered either a spherical or a sequential latent space, but none has considered latent SDEs on the sphere as priors for variational inference of the posterior process. This is made possible by designing an uninformative sequential prior process on the sphere (with a tractable KL divergence) and a parametric class of posterior SDEs that provably lead to solutions on the sphere, together with an associated geometric Euler-Maruyama scheme for the numerical integration of the SDE. The model is then optimized using an ELBO loss adapted to sequential variables and the SDE at hand, using the Power Spherical law as a reparametrization-friendly prior distribution on the initial condition, and furthermore fitting the drift of the latent SDE. The method is tested on a number of datasets and machine learning tasks, for which having both a sequential and a spherical prior may or may not be a natural choice. Results indicate that the proposed method is at least comparable to other SOTA approaches on the tasks addressed.
Strengths:
- The theory of ODEs on Lie Groups and the associated geometric integration schemes is nicely put to use, and is potentially applicable to any homogeneous space. It would be quite nice to try and extend all this work to hyperbolic spaces, which have been used a lot in ML recently, and happen to also be homogeneous spaces.
- I found the paper well written and enjoyable to read.
- The example of the sphere is compelling and leads to the specification of stochastic processes defined through SDEs on the sphere both for the prior and posterior. The tools introduced lead to a new prior distribution for stochastic processes on the sphere, and tractable posterior distributions shown to be usable in a variational inference context.
- An extensive experimental study is carried out on different standard datasets. The proposed framework is applicable to a number of different tasks involving time series: classification (of each time step or of the global sequence), interpolation, and regression. The method proves competitive with respect to SOTA approaches for each task.
Weaknesses:
- As often in works involving spherical, or more generally hyperbolic, latent spaces, the motivation for using them is not always crystal clear for all applications. For data involving some kind of periodicity, as for most of those tested here (pendulum, rotating MNIST), this makes sense, but for others it is not so obvious why one would want to use spherical latent spaces. Would this method work for, e.g., time series prediction of chaotic data, e.g., the Lorenz system? However, this is a minor remark. Once the need for a spherical latent space has been established, the proposed work successfully extends variational inference for sequential data to this setting.
- I find it a bit disappointing not to find an application of the proposed method to uncertainty quantification, or at least one where the capacity to sample from the posterior process is put to use. To me, this is the main interest of working with latent SDEs instead of ODEs: one gets a sequential generative model capable of sampling and of computing expectations, covariances, or other statistics. This could be showcased, e.g., through a simple experiment of forecasting time series, showing predictions and confidence intervals.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Are the error bars reported in the tables computed through multiple training sessions or simply several samples of the posterior, for a given trained model?
- On a related note, I am not sure how the model is adapted for tasks such as per-frame classification. I get that the purpose of the $h$ function is to potentially map to another space, e.g., using a softmax for classification problems, and I imagine the second term of the ELBO is the one that needs to be adapted depending on the model. It could be useful to detail one example (e.g., classification) and explain what Eq. (2) becomes in that case. Could $h$ be learned jointly with the posterior instead of being assumed to be known?
- In figure 1, I would have been interested in seeing a few trajectories of posterior samples that do not have a constant label, to see if the proposed intuition for the trajectories still holds in that case.
- How does the neural net used to fit the parameters of the initial condition given by the spherical power law constrain them to be positive for the concentration parameter and on the sphere for the location parameter ?
- Note that Figure 2 does not display properly on all the different pdf readers I tried. For one of them, I cannot see the boxes and arrows, but just the text, which makes the figure hard to read.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - One of the main limitations of the work is the lack of motivation of using SDEs/generative models instead of simply ODEs/deterministic models, if the sampling capacity of the model is never really put to use, while in many domains involving time series uncertainty quantification is a key scientific issue (see above for detailed comments on this point).
- Another minor limitation is the use of a geometric Euler-Maruyama solver for the SDE, while more accurate solvers (of Runge-Kutta-Munthe-Kaas type) exist. Those are not used so as not to incur too much computational burden, but I wonder how much of a problem this would be. More generally, I would also like to have an idea of the required computational time/complexity of the proposed model, and how much it would depend on the solver used.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we thank the reviewer for the positive feedback! The suggestion to extend our method to hyperbolic spaces $\mathbb H^n$ is quite interesting and opens up a new direction of future applications that we did not think of so far, e.g., relativistic dynamics in Minkowski space-time. The hyperboloid model of $\mathbb H^n = O(1,n)/(O(1) \times O(n))$ enables defining a stochastic process on $\mathbb H^n$ via an SDE in $O(1,n)$ or its identity component $SO^+(1,n)$, similar to the presented case of $\mathbb S^n$ and $O(n)$. In fact, $O(1,n)$ is also a quadratic Lie group and nicely fits into our setting.
> ... Would this method work for e.g. time series prediction of chaotic data, e.g. the Lorenz system? ...
We fully agree that the choice of latent space geometry should be informed by the data. For data that involves some kind of periodicity, a spherical latent space is a good choice, but for chaotic data, this is less clear. To investigate the reviewer's suggestion regarding the Lorenz system, we used our latent dynamic model to replicate the experimental setup of [Xi et al., 2020] for fitting a *stochastic* Lorenz attractor. As can be seen from **R-Fig. 1** in the attached PDF (showing 75 posterior samples), our method is expressive enough to model the dynamics of this system. As a side remark, a second motivation for using a spherical or, more generally, a homogeneous latent space is that it enables using Lie group solvers with stronger convergence guarantees (see [Muniz et al., 2022], [Marjanovic et al., 2018]).
> I find it a bit disappointing not to find an application of the proposed method to uncertainty quantification, or at least one where the capacity to sample from the posterior process is not put to use ...
We agree with the reviewer that leveraging the capability of the SDE model for uncertainty quantification would undoubtedly be very interesting. In the current work, we primarily focused on developing the methodological foundation and limited our experiments to a thorough evaluation of interpolation and regression/classification capabilities on existing benchmark data. A similarly extensive evaluation of uncertainty quantification seemed out of scope. Nevertheless, following the reviewer's suggestion, **R-Fig. 2** in the attached PDF now includes a visualization of uncertainty in the angle predictions of the **Pendulum angle regression** experiment (for one testing instance). While this is a regression and not a forecasting problem, it underscores XRoq's point about the model's capability to assess uncertainty. Qualitatively, uncertainty is higher in regions where the angle prediction is less accurate. We will include this figure (and additional visualizations of this kind) in a final version and will more prominently point out the possibility for uncertainty assessment.
> Are the error bars reported in the tables computed through multiple training sessions or simply several samples of the posterior, for a given trained model?
The error bars in the tables (e.g., 0.5e-3 for pendulum regression) are standard deviations computed over **5** training runs (with different random seeds). The standard deviations with respect to sampling from the posterior are lower (0.05e-3). We will make this clear in a final version.
> On a related note, I am not sure how the model is adapted for tasks such as per-frame classification ...
> Could h be learned jointly with the posterior instead of being assumed to be known?
We apologize for the confusion and refer the reviewer to our **General Response** section for full clarification. In short, $h$ can be removed from the preliminaries section, as our variational setting can be described without it. We intended to introduce the observation model in keeping with the more common approach of simply stating the generative model, but this caused more confusion than it helped.
Regarding a detailed description of our setting on one example: let's take the pendulum angle regression task: in that case, the decoder part of our model (1) yields $p(\mathbf{x}(t)|\mathbf{z}(t))$ modeled as a Gaussian distribution with its mean representing the reconstructed observation (i.e., pendulum images) and (2) an additional neural network (as in [Schirmer et al., 2022]) maps latent states $\mathbf{z}(t)$ to the pendulum's angle. Deviations from the desired ground truth angle are measured via an MSE loss. Hence, the overall optimization objective becomes *MSE + negative ELBO*.
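For illustration, a minimal numerical sketch of how the two loss components combine (all shapes, stand-in networks, and helper names below are hypothetical, not our actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: batch of 8 latent states z(t) and flattened target images x(t).
z = rng.normal(size=(8, 16))
x = rng.normal(size=(8, 784))

def decoder_mean(z):
    # stand-in for the decoder network mapping latent states to image means
    W = rng.normal(size=(16, 784)) * 0.01
    return z @ W

def aux_angle(z):
    # stand-in for the additional regression head mapping z(t) to an angle
    w = rng.normal(size=(16,)) * 0.1
    return z @ w

# (1) Gaussian observation model: reconstruction term of the negative ELBO
# (negative log-likelihood of x given the decoded mean, unit variance,
# constants dropped)
recon_nll = 0.5 * np.mean(np.sum((x - decoder_mean(z)) ** 2, axis=1))

# path KL term of the ELBO (placeholder scalar here; in the model this is the
# KL divergence between the posterior and prior path distributions)
path_kl = 0.1
neg_elbo = recon_nll + path_kl

# (2) MSE between predicted and ground-truth pendulum angles
angle_gt = rng.normal(size=(8,))
mse = np.mean((aux_angle(z) - angle_gt) ** 2)

# overall optimization objective: MSE + negative ELBO
loss = mse + neg_elbo
```

In training, `loss` would be minimized jointly over the encoder, decoder, SDE, and regression-head parameters.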
>In figure 1, I would have been interested in seeing a few trajectories of posterior samples that do not have a constant label, to see if the proposed intuition for the trajectories still holds in that case.
Following your suggestion, we selected two trajectories with constant class labels and **one** trajectory with a label switch. **R-Fig. 4** (right) in the attached PDF shows that the latter has a larger drift component (as it needs to cross decision boundaries). The coloring indicates the *predicted* class label (with a 30\% error in the label-switch case and 0\% error in the constant-label case). Quantitatively, **R-Fig. 4** (left) shows the distribution of path KL divergences *with* and *without* label switches, highlighting that trajectories *with* label switches indeed deviate more from the driftless prior.
> How does the neural net used to fit the parameters of the initial condition given by the spherical power law constrain them to be positive for the concentration parameter and on the sphere for the location parameter?
To ensure positivity of the concentration parameter ($\kappa$) of the power-spherical distribution, we take the square. Initially, we experimented with an exponential mapping but found that to be too aggressive. The location parameter is divided by its norm.
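A minimal sketch of these two constraints (the helper name and inputs are illustrative, not from our code):

```python
import numpy as np

def power_spherical_params(raw_mu, raw_kappa):
    """Map unconstrained network outputs to valid power-spherical parameters.

    kappa is squared to ensure positivity (a squaring was found to be less
    aggressive than an exponential mapping); mu is projected onto the unit
    sphere by dividing by its norm.
    """
    kappa = raw_kappa ** 2
    mu = raw_mu / np.linalg.norm(raw_mu)
    return mu, kappa

mu, kappa = power_spherical_params(np.array([3.0, -4.0]), -1.7)
# kappa = 2.89 >= 0, mu = [0.6, -0.8] with unit norm
```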
> Note that Figure 2 does not display properly on all the different pdf readers I tried ...
Thank you for pointing this out. We will obviously fix this issue in a final version. | Summary: The authors are interested in learning neural SDE models. Instead of parameterising arbitrary latent SDEs, the authors restrict their attention to homogeneous spaces, and in particular the unit sphere, in order that they can leverage the transitive group (the Lie group) to construct an SDE in the space in terms of an SDE in the Lie group, whose logarithm is a linear SDE, which leads to convenient solutions. The major advantage of this is using a straightforward discretise-then-optimise approach without an explosion of computational cost. They perform a thorough evaluation against other neural ODE/SDE methods, and show competitive performance despite the more restrictive form of the latent SDE.
Post-discussion: The major concern has been addressed. The authors have pledged to extend their evaluation of the efficiency of their method to other experiments in the paper, which will be sufficient to address that part.
Strengths: 1. (major) To my knowledge the approach is novel and markedly different to other latent SDE methods, but is applicable to many of the same tasks, so is highly relevant.
2. (major) The evaluation is generally very thorough (though see weaknesses), with a wide range of related methods and different tasks evaluated.
3. (major) The code is available, which improves reproducibility, and is based on a widely used framework (pytorch) which improves possible impact.
4. (minor) The paper is quite clearly written, including good discussion of the empirical findings.
Weaknesses: 1. (major) The method is claimed to be efficient for learning, but no evidence is provided to support this claim -- the experiments as they are demonstrate that the method can produce a test performance which is competitive with apparently more flexible approaches, and that the relative performances may depend somewhat on the task. For the reader to judge how far this method is more efficient, a quantitative evaluation of the time for learning for different models would help.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: typos:
* lines 25, 58: Paramaterizing -> parameterizing
* line 178: extra $≤$
* line 180: arithmetics -> arithmetic
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are well discussed explicitly in the paper.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >(major) The method is claimed to be efficient for learning, but no evidence is provided to support this claim -- the experiments as they are demonstrate that the method can produce a test performance which is competitive with apparently more flexible approaches, and that the relative performances may depend somewhat on the task. For the reader to judge how far this method is more efficient, a quantitative evaluation of the time for learning for different models would help.
We agree that additional empirical evidence is required to demonstrate and quantify that our method is more efficient. We will add a subsection to discuss computational/runtime aspects.
Importantly, during the authors' response period, we already ran a careful runtime comparison on the Rotated MNIST interpolation task, with results providing evidence in support of our claim. We kindly refer the reviewer to our **General Response** section for a detailed discussion and, in particular, to **R-Tbl. 1** in the attached PDF. Overall, our method is on par in terms of runtime with a latent ODE model of comparable size (#parameters) and substantially faster than the conceptually closer (and more flexible) latent SDE approach of [Xi et al., 2020], both evaluated using an Euler(-Maruyama) scheme.
We would also like to thank the reviewer for pointing out typos in the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks, this is helpful to see. Just a few short follow ups.
1. Will you produce similar results for the other datasets?
2. For R Fig 3, could you include the uncertainty estimates? (since you have run with five initialisations)
3. It seems like the main efficiency you gain is vs other latent SDE methods, but maybe the motivation for using SDE models over ODE models is not so clear in the current manuscript, but I think this is already satisfactorily addressed in your response to reviewer XRoq.
---
Reply to Comment 1.1.1:
Comment: First, we thank the reviewer for the prompt response.
**ad 1)** Yes, we will produce similar results for the other datasets in a final version (assuming the reviewer is referring to the runtime experiments).
**ad 2)** We replotted **R-Fig. 3** over all 5 runs, with very little runtime variation per approach. Unfortunately, we cannot update the attached PDF at this point, but we will include such a figure in a final version. Specifically, we will show the mean across all runs and shade the standard deviation (as error bars are hard to see due to the large number of points on the $x$-axis).
**ad 3)** It is correct that our main efficiency gain is wrt. other SDE methods. Following your (and Xroq's) suggestion, we will update our manuscript to more prominently point out the utility of being able to sample from the (latent) posterior process, e.g., in the context of uncertainty assessment. | Summary: The paper provides an affirmative answer to a very natural and intriguing question: Can we simplify the underlying latent model describing the dynamics of a temporal process so that it can overcome the computational and technical challenges with neural ODEs/SDEs while accurately modeling the real-world phenomenon? The authors propose SDE models that arise from the action of a Lie group on a homogeneous space instead of an arbitrary SDE. They also demonstrate at-par performance of their approach on benchmark regression, classification and interpolation problems when compared to the existing methods.
Strengths: This work is really interesting as it opens a new direction of research that could reduce the model's degrees of freedom and yet achieve (or nearly) the sota. It may encourage machine learners to leverage recent developments in the SDE literature and further simplify learning a time series phenomenon. The paper is well written.
Weaknesses: 1. The paper claims in the introduction that their approach significantly reduced computing efforts.
“ However, .......... computing gradients.”
However, there is no comparison with other approaches I could find in the numerical section. Also, a mathematical discussion on the reason behind computational gain is absent.
2. I think the paper would benefit from adding more discussion on the statement below in the main paper.
“Numerical solutions to such an SDE are computed with a simple one-step geometric Euler-Maruyama scheme for which the “discretize-then-optimize” strategy of backpropagating gradients during learning is not a limiting factor."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. Why do we need such a specific SDE form for G_t in the display (4)?
2. Why do we not need a reparametrization trick for the proposed approach? I found no discussion on this in the paper (except a standard comment in line 111).
Minor.
You followed [47] (in the paper) or [48] (in the appendix) for preprocessing the human activity dataset. Which one did you follow?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper includes a discussion on the limitations and possible societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer GYzQ identified two weaknesses in our submission that we will address below.
> The paper claims in the introduction that their approach significantly reduced computing efforts ...
To address the remark on significantly reduced computing effort, we reran our experiments on the rotated MNIST dataset and carefully compared the training time of our method to the latent ODE and SDE models from [Rubanova et al., 2019] and [Xi et al., 2020], respectively. We report the results in **R-Tbl. 1** & **R-Fig. 3** of the attached PDF, and refer the reviewer to our **General Response** section for a detailed discussion.
>I think the paper would benefit from adding more discussion on the statement below in the main paper.
>“Numerical solutions to such an SDE are computed with a simple one-step geometric Euler-Maruyama scheme for which the “discretize-then-optimize” strategy of backpropagating gradients during learning is not a limiting factor."
The word choice in this paragraph is somewhat unfortunate. The phrasing is too general and should rather refer to the actual use case scenario. We will adjust this accordingly in the final version. As we need to backpropagate gradients through each solver step, memory complexity scales linearly with the number of (fixed) time steps multiplied by the amount of memory consumed per step (which depends on the latent space dimensionality). Our statement in the manuscript should be understood in the context of the **latent** dynamics setting, where typically, a low dimensionality of the latent space is expected to suffice for good performance on downstream tasks (as shown in our experiments). While each step of the solver is quite simple (due to the construction of our SDE), there may well be limitations in using the "discretize-then-optimize" strategy once the latent space dimensionality reaches a certain point. We will rephrase (i.e., tone down) this statement and add more discussion on that issue.
>Why do we need such a specific SDE form for $G_t$ in the display (4)?
We want to clarify that we do not state that this form of an SDE for $G_t$ is necessary. We restrict our SDEs to the form as in Eq. (4) because it constitutes an easy and natural way to define a stochastic process $G_t$ that evolves in the Lie group in terms of drift and diffusion residing in the Lie algebra. Moreover, for SDEs of this form, we can rely on numerical solvers from [Marjanovic et al., 2018]. On the tasks and datasets considered in our work, SDEs of such a form appear to be sufficient to achieve competitive performance (as noted by the reviewer). Nevertheless, the formulation in Eq. (4) is quite general. The restrictions on the coefficients $V_0,V_1,\dots,V_m$ specified in Eq. (5) only state that $V_1,\dots,V_m$ are in the Lie algebra and that $V_0$ is determined by an additional element in the Lie algebra that needs to be adjusted by a correction term to cancel out the stochastic drift away from the Lie group.
> Why do we not need a reparametrization trick for the proposed approach? I found no discussion on this in the paper (except a standard comment in line 111).
We apologize for not being clear enough on this point. The short answer is 'yes'; we do need a reparametrization trick. To be more specific, sampling from our posterior (path-) distribution on the sphere consists of two steps. First, we sample an initial value from a power-spherical distribution with location parameter $\boldsymbol{\mu}$ and concentration parameter $\kappa$. Importantly, this distribution allows for a reparametrization trick which we use for training. Second, we numerically solve an SDE in the homogeneous space that starts at the sampled initial value with a geometric Euler-Maruyama scheme. This also uses a reparametrization trick, as with each update step, a random matrix with elements $\sim \mathcal N(0,1)$ is multiplied with $\sqrt{\Delta t}$ and a learnable parameter $\sigma^{\boldsymbol \phi}$ to realize matrix elements $\sim \mathcal N(0,(\sigma^{\boldsymbol \phi})^2 \Delta t)$. We will add this discussion to the appendix.
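For illustration, here is a minimal numerical sketch of one such reparametrized geometric Euler-Maruyama update on $\mathbb S^2$ (all names are ours; for brevity, the Itô correction of the drift coefficient discussed around Eq. (5) is omitted, and the noise is simply skew-symmetrized to land in the Lie algebra $\mathfrak{so}(n)$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

n = 3
dt = 1.0 / 16.0      # fixed solver step size
sigma = 0.5          # learnable noise scale (sigma^phi in the rebuttal)
A = rng.normal(size=(n, n))
A = A - A.T          # illustrative drift coefficient in the Lie algebra so(n)

def em_step(z, A, sigma, dt, rng):
    # reparametrized noise: a standard normal matrix scaled by sigma * sqrt(dt),
    # then projected to so(n) by skew-symmetrization
    W = sigma * np.sqrt(dt) * rng.normal(size=A.shape)
    Omega = A * dt + 0.5 * (W - W.T)
    # exp(Omega) is a rotation in SO(n); acting on z keeps it on the sphere
    return expm(Omega) @ z

z0 = np.array([1.0, 0.0, 0.0])   # initial value sampled on the unit sphere
z1 = em_step(z0, A, sigma, dt, rng)
# ||z1|| = 1 up to numerical precision: the update never leaves the sphere
```

Because `Omega` is skew-symmetric, `expm(Omega)` is orthogonal, so the gradient of `z1` with respect to `A` and `sigma` flows through a deterministic map of the sampled noise, which is exactly what the reparametrization trick requires.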
> Minor. You followed [47] (in the paper) or [48] (in the appendix) for preprocessing the human activity dataset. Which one did you follow?
Thank you for pointing this out. We followed [47] and will update the manuscript accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your thorough response and for addressing all my concerns. | Rebuttal 1:
Rebuttal: We would like to thank **all** reviewers for their overall positive feedback and their valuable comments and suggestions!
While we address all issues point by point per reviewer, we would first like to clarify three issues common to (almost) all reviews: (♠) substantiating the computational claims from the manuscript with a **runtime comparison**, (♥) resolving shortcomings in the **presentation of our preliminaries**, and (♦) clarifying the **optimization objective** for per-time-point classification/regression experiments.
---
*We refer to the following works in our rebuttal and will denote figures and tables from the attached PDF as R-Fig. & R-Tbl. XXX, respectively.*
**[Rubanova et al., 2019]** Y. Rubanova, R.T.Q. Chen, and D. Duvenaud.
*Latent ODE for irregularly-sampled time series*. In: NeurIPS 2019.
**[Xi et al., 2020]** X. Li, T.-K. L. Wong, R.T.Q. Chen, and D. Duvenaud. *Scalable gradients for stochastic differential equations*. In: AISTATS 2020.
**[Schirmer et al., 2022]** M. Schirmer, M. Eltayeb, S. Lessmann, and M. Rudolph. *Modeling irregular time series with continuous recurrent units*. In: ICML 2022.
**[Marjanovic et al., 2018]** G. Marjanovic and V. Solo. *Numerical methods for stochastic differential equations in matrix Lie groups made simple*. IEEE Trans. Autom. Control., 63(12):4035–4050, 2018.
**[Muniz et al., 2022]** M. Muniz, M. Ehrhardt, M. Günther, and R. Winkler. *Higher strong order methods for linear Itô SDEs on matrix Lie groups*. BIT Numer. Math., 62(4):1095–1119, 2022.
**[Itô, 1975]** K. Itô. *Stochastic calculus*. Lect. Notes Phys., 39:218–223, 1975.
---
(♠) **Runtime comparison.** To back up our computational claims, we ran additional experiments to assess the runtime and efficiency of our method, summarized in the table below (and **R-Tbl. 1**).
| | Runtime/Batch [s] | Test MSE $\left(\times 10^{-3}\right)$ |
| ---------------------------- | ----------------- | -------- |
| LODE (Rubanova et al., 2019) | 0.053 $\pm$ 0.004 | 14.9 $\pm$ 0.275 |
| LSDE (Xi et al., 2020) | 0.112 $\pm$ 0.009 | 14.0 $\pm$ 0.543 |
| Ours - $(\mathbb{S}^{n-1},\mathrm{SO}(n))$ | 0.055 $\pm$ 0.005 | 11.2 $\pm$ 0.573 |
| Ours - $(\mathbb{R}^{n},\mathrm{GL}(n))$ | 0.056 $\pm$ 0.008 | 12.9 $\pm$ 0.854 |
This comparison is done on _Rotating MNIST_, using the same architecture (encoder/decoder) as in Sec. 4.2 of the manuscript, but with varying latent dynamics models. We compare our approach to 1) a latent ODE (_LODE_, as in [Rubanova et al., 2019]) and 2) a latent SDE (_LSDE_, as in [Xi et al., 2020]). All models have approx. 450k parameters. Runtime/Batch refers to the wall-clock time per **forward+backward pass** (with batch size 50), computed from a list of all SGD update steps from 5 randomly initialized training runs. We also list the final test MSEs, averaged over these 5 runs. For a fair comparison, we always use *Euler's method* as the ODE/SDE solver with a fixed step size of 1/16 (since 16 time points are available in [0,1]).
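Per-batch wall-clock statistics of this kind can be collected with a simple timing loop; the sketch below (the function name `time_step` is ours, and it times an arbitrary callable rather than an actual forward+backward pass) illustrates the mean/std measurement:

```python
import time

def time_step(step_fn, n_batches=5):
    # Collect wall-clock seconds per call and return (mean, std),
    # mimicking a per-batch forward+backward timing measurement.
    ts = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        step_fn()
        ts.append(time.perf_counter() - t0)
    mean = sum(ts) / len(ts)
    std = (sum((t - mean) ** 2 for t in ts) / len(ts)) ** 0.5
    return mean, std
```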
Overall, although our approach implements a latent **SDE**, runtime per batch is on par with the latent ODE variant and substantially lower than the more flexible latent SDE approach from [Xi et al., 2020] (which is closest in terms of modeling choice). Notably, decreasing the step size (e.g., to 1e-2) for latent ODE/SDE did not noticeably improve performance (MSE) but increased runtime linearly.
To account for potentially different convergence speeds, we also present loss curves in **R-Fig. 3**. The left plot in this figure shows *training MSE* vs. *epoch* and reveals that our method needs fewer SGD updates to converge, whereas LSDE and LODE converge at approx. the same rate. The right plot accounts for the different runtimes per batch and reveals that our method needs only around half the training time of LSDE.
*Similar runtime measurements for all experiments will be added to a final version.*
(♥) **Presentation of preliminaries.** We agree that our current presentation of the preliminaries in Sec. 3.1 could be misunderstood, especially with the introduction of $h$ in Eq. (1). The arguably more common approach (which we will follow) is to state the data generation process as (1) sampling a realization of a path-valued latent variable $\mathbf{z}$ from a suitable parametric prior $p_{\boldsymbol{\theta}^*}(\mathbf{z})$ and subsequently (2) sampling a realization of a path-valued observation $\mathbf{x}$ from some conditional distribution $p_{\boldsymbol{\theta}^*}(\mathbf{x}|\mathbf{z})$. This makes the use of $h$ obsolete. We apologize for the confusion and will adjust Sec. 3.1 accordingly.
(♦) **Clarification of optimization objective(s).**
We first note that our data (irrespective of the task) always contains multiple series of time-indexed observations and that during training, we minimize the negative ELBO, consisting of a KL divergence term between posterior and prior latent path distributions and a log-likelihood term $\log p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})$. If additional supervision is available, a corresponding loss term is added, e.g., (per time-point) cross-entropy for classification or (per time-point) MSE for regression. These losses are computed on the output of an additional neural network (typically a two-layer MLP) that maps the **latent states** $\mathbf{z}$ (*not* the reconstructions) to the response variable(s). In practice, when evaluating $\log p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})$, we model the conditional distribution at each time point as a Gaussian distribution. However, this is just one of many possible choices.
The only exception to the statement above is the **Human Activity** experiment, where we follow [Rubanova et al., 2019] for fair comparison and train solely with cross-entropy and the KL term, but without $\log p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})$.
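Schematically, the per-time-point objective described above could be assembled as follows (a toy scalar sketch under the per-time-point diagonal-Gaussian assumption; the function names are ours, and the path-distribution KL of the actual model is reduced to a precomputed scalar):

```python
import math

def gauss_nll(x, mu, sigma):
    # Negative log-likelihood of x under N(mu, sigma^2).
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

def neg_elbo(xs, recon_mus, recon_sigma, kl, sup_losses=()):
    # Negative ELBO: per-time-point reconstruction NLL + KL term, plus
    # optional supervised terms (e.g. per-time-point cross-entropy or MSE).
    rec = sum(gauss_nll(x, m, recon_sigma) for x, m in zip(xs, recon_mus))
    return rec + kl + sum(sup_losses)
```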
Pdf: /pdf/a386163ddea3b49f1afd156fe23b237ec46278d4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly | Accept (poster) | Summary: This paper introduces a benchmark, called TA-Bench, for transfer-based attacks. The authors implement 30+ transfer-based attack methods that are mostly proposed in the last 3 years. This paper takes several aspects of transfer-based attacks into consideration, including augmentation, optimizer, substitute model training, and generative modeling.
Strengths: 1. Solid work with a considerable amount of experiments. I believe that this work, as its topic suggests, will bring new insights in systematically, practically, and fairly evaluating transfer-based attacks.
2. Apart from the benchmark, this paper provides some useful takeaways in Lines 338-345.
Weaknesses: This paper would better be submitted to the benchmark track.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The recent paper "Reliable Evaluation of Adversarial Transferability" by Yu et al. also provides a benchmark for evaluating the transferability of adversarial examples. Could you please compare your work with theirs?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive feedback. Except for the comment about "submitting to the benchmark track" which is addressed in our general response, all the comments are replied to as follows.
> The recent paper "Reliable Evaluation of Adversarial Transferability" by Yu et al. also provides a benchmark for evaluating the transferability of adversarial examples. Could you please compare your work with theirs?
**A:** We appreciate the pointer to this paper. It seems that this paper became available online after the NeurIPS submission deadline, so it was not discussed in our paper. Compared with it, the advantages of our TA-Bench include at least:
* More evaluated methods.
* More pairs of substitute-victim models (forming more comprehensive evaluations).
* A new and advanced back-end optimization method, _i.e._, UN-DP-DI$^2$-TI-PI-FGSM.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. The rebuttal and global response have fully addressed our concerns and we have no follow-up questions. We will keep our score and recommend accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks to the reviewer
Comment: Dear Reviewer NrCP,
We are pleased to know that your concerns have been fully addressed and thank you for recommending the acceptance of our paper!
Best regards,
Authors | Summary: This paper establishes a transfer-based attack benchmark (TA-Bench) so that researchers can take advantage of it to compare different methods systematically, fairly, and practically. TA-Bench implements 30+ methods and evaluates them on 10 popular victim models (architectures) on ImageNet.
Strengths:
1. The paper is well-written and easy to follow.
2. TA-Bench has practical value for transfer-based attacks, since it implements 30+ methods and provides comprehensive evaluations.
Weaknesses:
1. The benchmark focuses on ImageNet. Is it possible to extend it to other datasets, smaller (MNIST) or larger (JFT-300M)?
2. Typos:
1) The terms “TA-bench” and “TA-Bench” should be made consistent.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. This paper did a very good job. TA-Bench is comprehensive. I score 6 for this paper. But this paper may fit better in the “NeurIPS Track Datasets and Benchmarks”.
2. In Table 1, what is the meaning of the different colors, e.g., grey, black, and bold black?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations:
The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive feedback. Except for the questions about the D&B track, which are answered in our general response, all your comments are replied to as follows.
> The benchmark focuses on ImageNet. Is it possible to extend to other datasets, smaller (MNIST) or larger (JFT300)?
**A:** We focus on ImageNet first for several reasons. First of all, almost all papers studying transfer-based attacks **developed** and **evaluated** their methods on ImageNet. Only a few papers conducted evaluations on smaller-scale datasets, such as CIFAR-10, as well. This is because ImageNet contains large-scale, diverse images, making the data distribution more representative. We ensure a fair comparison on ImageNet first to help clarify what is actually effective. Another reason why we focus on ImageNet first is that vision transformers generally require training on ImageNet, and the dataset offers more options for choosing substitute/victim models, enabling a more comprehensive understanding of the performance of an attack, as discussed in Section 4.3. After consolidating all results on ImageNet, we will consider performing evaluations on CIFAR-10, too. MNIST is too simplistic to be used for adversarial experiments nowadays. As for JFT-300M, to the best of our knowledge, it is not publicly available yet.
> Typos: 1)The term “TA-bench” and “TA-Bench” should be consistent.
**A:** Thanks for pointing out these typos. We will fix them in an updated version of the paper.
> In table 1, what is the meaning of different colors, e.g., grey, black, bald black?
**A:** In Table 1, a number is colored grey when its performance is worse than that of the back-end attack (_i.e._, I-FGSM or UN-DP-DI$^2$-TI-PI-FGSM). Bold black indicates the best performance among all methods in the same category. We will highlight this information in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I am satisfied with the results.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer QeCx,
We would like to thank you for the positive feedback and for responding to our rebuttal!
Best regards,
Authors | Summary: This paper explores the problem of adversarial transferability evaluation on image classification tasks. The authors observe that there are a large number of transfer attacks, but that this community lacks a standard benchmark. Therefore, this paper establishes a transfer-based attack benchmark (TA-Bench) which implements 30+ methods and then evaluates and compares them comprehensively on 10 popular substitute/victim models on ImageNet.
Strengths: 1. The benchmark of this paper integrates many transfer attacks.
2. TA-bench helps researchers to carry out research more easily and in-depth.
Weaknesses: 1. We acknowledge the workload and potential of this paper, but compared to previous work, the proposed TA-Bench does not consider adversarial defense methods at all, which leads to an incorrect robustness assessment. We think that some of the latest pre-processing defenses and adversarially trained models should be taken into account (at least ens3-adv-Inception-v3, ens4-adv-Inception-v3, ens-adv-Inception-ResNet-v2, HGD, R&P, NIPS-r3, JPEG, FD, ComDefend, NRP, RS, Bit-Red, and DiffPure). We will not give references here, because these are very common in this community.
2. The scalability of TA-Bench is not strong enough. Research on transfer attacks has recently paid more attention to targeted attacks, which are a future research direction, but TA-Bench lacks the corresponding evaluations, which restricts its contribution. We believe that the category relationships of images should be considered when screening the dataset, and that an evaluation system for targeted transfer attacks should be constructed.
3. The motivation for this paper is not very sufficient. The set of models in the NIPS 2017 adversarial competition is indeed relatively limited, but I-FGSM does show the most powerful attack performance as an optimization back-end (after being integrated with other methods), and the experiments in this paper also illustrate this point. We think it is best for the authors to start from the challenges of transfer attacks when building benchmarks, including but not limited to targeted attacks and ViT-to-CNN transfer.
4. More and more latest foundation models adopt the transformer architecture, but in this paper, the discussion of the adversarial transferability of ViT and CNN is not very sufficient. This is also a recent research hotspot.
5. Also, we are a little less sympathetic to the taxonomy of this paper. We feel that it could be divided into more detailed categories such as advanced optimization, data augmentation, model ensembling, feature attacks, and network structure, rather than just "Gradient Computation" methods and "Substitute Model Training" methods.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors did not discuss limitations and ethical risks in the main body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback. Except for the questions about defense methods, targeted attacks, and the taxonomy which are answered in our general response, all your comments are replied to as follows.
> We think it is best for the authors to start with the challenge of transfer attacks to build benchmarks.
**A:** TA-Bench is motivated by the fact that many existing papers on transfer-based attacks lack systematic, fair, and practical experimental evaluations. First of all, different architectures should be considered when setting up the substitute and victim models, while in many recent papers only convolutional networks were considered. Second, many methods verified their effectiveness only with a very basic optimization back-end, _i.e._, I-FGSM, despite the existence of more advanced optimizers and input augmentations. It is hence unclear whether the technical innovations really work in practice when consolidating all efforts in transfer-based attacks. We believe these problems have hampered the development of adversarial machine learning, and this motivates us to establish such a benchmark (_i.e._, TA-Bench) for evaluating transfer-based attacks.
We agree that bootstrapping a challenge is one way to establish a benchmark, but our motivation also clearly shows the necessity of building this one.
> The discussion of the adversarial transferability of ViT and CNN is not very sufficient.
**A:** In our discussions about cross-architecture transferability (in Section 2 in our supplementary material), we have shown the performance when adopting transformers/CNNs as the substitute model, as the victim model, or both. Besides the tested transformer models and convolutional models, it is easy to incorporate more models in our benchmark.
---
Rebuttal Comment 1.1:
Comment: In previous work, we always test the performance against defense methods, which is crucial for evaluating adversarial robustness. Likewise, targeted attacks pose a more significant threat to real-world applications. We still believe that this benchmark will definitely advance the development of the field, but these two items should still not be ignored. They are all available in the dataset of the commonly used NIPS 2017 adversarial competition. So we still need this work to enrich these contents further. Our scores will not affect the acceptance of this paper, but we still hope that the authors can implement these to guide a correct adversarial robustness evaluation.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Yj8S,
Thanks for responding to us and recognizing our contributions as "definitely lead to the development of the field".
For the comment about defenses, we would like to gently remind you that, as reported in our general response, the experiment on defensive models has been carried out and many results have already been given in the PDF attached to the general response. We will also incorporate these results into our paper as mentioned. As for targeted attacks, we will test them when appropriate, as suggested.
Best regards,
Authors
---
Rebuttal 2:
Title: Your further feedback would be greatly appreciated.
Comment: Dear Reviewer Yj8S,
Thanks again for your comments! We have provided detailed responses to all of them. If there are any remaining concerns about our paper, we are more than delighted to address them. Have a nice day.
Best regards,
Authors | Summary: The paper proposes a new benchmark of techniques designed to increase the transferability of adversarial examples. The paper implements more than 30 of these techniques to compare the success rate of the corresponding attacks. The paper identifies several flaws of current evaluation protocols, for example, not considering the pre-processing pipeline.
Strengths: - The engineering work done is impressive. Benchmarking 30 techniques for transferability is a testimony of good view of the field of transferability, and of a strong software development work.
- The paper correctly and fairly evaluates techniques, regarding best practices developed recently in \[59\]. In particular, the paper takes a particular care to control the effect of the number of gradients computed per iteration, which should be controlled for fairness.
- Some choices in categorisation are sound. For example, evaluating separately techniques based on generative modelling makes sense.
- The related work is sensible and extensive. For example, the paper correctly acknowledges and iterates on the most related work \[59\], another benchmark paper on transferability. Unfortunately, the review of techniques is not exhaustive, and the paper would benefit from a clear explanation of how the 30 methods were selected. For example, the paper could exhaustively list all the techniques for transferability published at top conferences.
- The paper proposes a more consistent way to evaluate the complementarity of techniques than what is currently used. But there are some limitations (see below).
Weaknesses: - An exhaustive review of techniques published at top conferences (as done for review papers) would clarify why some techniques are included and not others. For example, the paper titled "Understanding and Enhancing the Transferability of Adversarial Examples" was the first to average gradients over (Gaussian) noise in 2018. Moreover, "Learning Transferable Adversarial Examples via Ghost Networks" (AAAI-20), "On Success and Simplicity: A Second Look at Transferable Targeted Attacks" (NeurIPS 2021), "Efficient and Transferable Adversarial Examples from Bayesian Neural Networks" (UAI 2022), and "Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation" (NeurIPS 2022) could be considered.
- No code is provided as supplementary materials. It is impossible to evaluate the claimed contribution of a unified codebase. Open-sourcing the code is particularly important for benchmark papers, where its value depends on the cleanness and extensibility of its code base.
- The usage of cross-validation is weak and not systematic. I think that the benchmark should provide a standardised protocol to tune hyperparameters and select any elements with cross-validation. For example, the paper should select the best combination of augmentation and optimizer by cross-validation (Section 4.2). This is a significant flaw of the current paper. Some tuning of the hyperparameters is currently done on a set of held-out target models (Section 4.3). But these models are removed from the test set of target models. Therefore, there is an overlap between validation and test targets across the experiments. The test evaluation is weaker because it is performed on fewer models. It would be better to use additional models that would be used only for validation. As the code is not shared, I cannot check whether the codebase includes an easy way to tune the hyperparameters of any technique implemented in the benchmark. From my experience, tuning the hyperparameters is of first importance when changing some experimental settings, in particular selecting the SGM hyperparameter when changing the source architecture.
- A limitation of the current benchmark is that the training dataset is supposed to be known. I think this significantly overestimates the success rates (I suspect that the effect is larger than that of the pre-processing). I understand that removing this hypothesis is very costly computationally. But the paper should therefore not position itself as a more realistic evaluation. In addition, I think training and distributing source and target models without this hypothesis would add high value to the benchmark (in practice, splitting ImageNet in two or introducing some distributional shift between the datasets).
- The paper does not extend the evaluation of some techniques to other source architecture. For example, in Table 1, most columns are empty for substitute model training. I understand that some techniques are too costly to train from scratch (RFA for example), but LGV and MoreBayesian should be feasible since they require only a few additional epochs from a pretrained model.
- The categories of techniques used are debatable. For example, I think that LGV and MoreBayesian should be categorised as gradient computation. Both techniques are training-based, but their primary objective is *not* to train a better single base substitute model, but to augment an existing base pretrained model and obtain several slightly modified models (to be used one per gradient). The "augmentation" category should be renamed "input augmentation" or "data augmentation" to clarify. I am not sure why input augmentation techniques and optimizer techniques are evaluated together as a single category of techniques, and "gradient computation" techniques are not evaluated together with optimizers too. In fact, numerous "gradient computation" techniques also augment the surrogate model (LinBP, GN, SGM, etc.). I think that a more granular evaluation of the categories of techniques, category by category, as done in \[59\], would be more advisable.
- I think that this paper would have been a perfect fit for the benchmark and datasets track of NeurIPS. I feel that the impact of the paper is slightly too limited for the main track. The conclusions of the benchmark have a somehow limited significance for the community.
- The paper evaluates the gradient computation category on the best combination of augmentation and optimizer techniques. It would be good to evaluate the relation in both ways: the gradient computation category on I-FGSM vs. the gradient computation category on the best surrogate models.
**Minor comments**
- The paper would benefit from polished writing and improved formatting. Bold should be used with care, only for sparse important keywords. It would be better to highlight entire sentences (or paragraphs) with italic instead of bold. The writing should be improved overall. For example, "Similar for IR and TAIG." (l.272) is not a valid sentence, and "state-of-the-arts" (l.183) is not a valid word. Exaggeration must be avoided. For example, please avoid familiar and exaggerated formulations like "super close" (l.329). The paper would benefit from more precise writing. For example, the paper states that "30+" techniques are compared, but the exact number of techniques is never mentioned. Some legends are missing descriptions. For example, the legend of Table 1 does not specify what the grey colour means, and does not specify that the first row is the source model.
- No evaluation of defended target models is performed. This is not necessarily a major issue, but it is advisable to discuss this limitation. Same for targeted/untargeted perturbations.
- I think that the paper would be more clear if it reports success rates instead of accuracies.
- Specifying the step size of I-FGSM relatively to the epsilon norm of the perturbation would be better and improve consistency. For example, use alpha = eps/10 for both Linf and L2 norms.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Could you share the code base (anonymously) to review it?
- Which hyperparameters are currently selected by cross-validation? More details should be provided on how did you tune the hyperparameters (paragraph l.228-l.241).
- Will you publish the code to sample the 5K subset of examples? This code is needed since this subset would need to be rebuilt for each new model training techniques, to ensure fairness. The 5K examples are correctly classified by all existing surrogate models, so it should be the case for surrogate models that will be added later. It is important to specify this in the documentation.
- Which number of random restarts did you use for PGD? I did not find this information in the paper.
- Line 324 states that transformers are better surrogate architecture than CNN. Is it true for all types of targets? I.e. does this observation hold for the BAA (best-case accuracy)? Or is it simply related to the fact that there are more transformers in the set of target models (and we are confused by the average AAA)?
- Can you describe more precisely the training differences, briefly mentioned line 332?
I am ready to increase my score depending on how the authors answer the weaknesses and questions.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Some limitations are not discussed (see above weakness section)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive feedback. Except for the common questions which are answered in our general response, all your comments are replied to as follows.
> Unfortunately, the review of techniques is not exhaustive.
**A:** Thanks for pointing out these methods. The implementation of "UN" in our paper is equivalent to one of the mentioned methods, and we will highlight it in the revised paper. Evaluation results of the other mentioned methods and several very recent methods (that are available online after the NeurIPS submission deadline) will be added to the paper.
> Hyper-parameters and a standard protocol to tune them.
**A:** Our benchmark does have a protocol to tune hyper-parameters. All important hyper-parameters, including the choice of position for NRDM, ILA, ILA++, LinBP, ConBP, FIA, and NAA, and the scaling factor for SGM (as you have mentioned) can all be tuned on a validation set that consists of 500 examples that do not overlap with the test data. The information will be highlighted in an updated version of our paper. The validation set and the tuning protocol will also be made publicly available.
> A limitation of the current benchmark is that the training dataset is supposed to be known.
**A:** As mentioned in the paper, we consider such a threat model just to keep in line with previous work that developed these transfer-based attacks. To the best of our knowledge, all the compared methods, in their original papers, took a substitute model trained on the same dataset as that used to train the victim models for experiments. While we fully agree that evaluating under a more stringent threat model is insightful, considering that we have made obvious changes to the experiments (including introducing various types of substitute/victim models, adopting more realistic pre-processing, _etc_), further modifying the models into ones trained on independent datasets **might lead to confusion about what leads to a performance change of these compared methods**. In addition, performing such an experiment requires determining many critical factors, such as the number of training images that the attacker is able to collect and even the resources the attacker owns. There are some papers that were written to deal with these problems, and we are more than glad to consider such an experiment in future work if possible.
> In Table 1, most columns are empty for substitute model training.
**A:** The reason why we did not extend the evaluation of "substitute model training" methods is indeed about training cost. Especially, training for these methods requires tuning many additional hyper-parameters, such as learning rate, weight decay, batch size, and $\lambda$ and $\gamma$ in MoreBayesian. Thus, the training cost is high even for LGV and MoreBayesian.
> The gradient computation category on I-FGSM and the gradient computation category on the best surrogate models.
**A:** We have reported the results of the "gradient computation" methods on I-FGSM in Table 1 (upper half), and these results can be easily compared with those on UN-DP-DI$^2$-TI-PI-FGSM. Table 1 also provides results on each surrogate model, thus the performance on the best surrogate models is given as expected.
> The paper would benefit from polished writing and improved formatting.
**A:** Thanks for the kind suggestions. We shall improve the writing and formatting accordingly.
> I think that the paper would be more clear if it reports success rates instead of accuracies.
**A:** As we discussed in lines 168-171 of our paper, we adopt prediction accuracy for evaluating attack performance because it makes it easier to incorporate other substitute/victim models in the future, since a reasonable calculation of the attack success rate requires benign examples to be correctly classified by all victim models, as suggested by previous papers.
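The distinction can be illustrated with a toy sketch (function names ours): the success rate only counts examples that every victim classifies correctly when benign, so its valid set shrinks each time a model is added, whereas accuracy does not depend on the model pool:

```python
def victim_accuracy(preds, labels):
    # Prediction accuracy on adversarial examples: lower = stronger attack.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def attack_success_rate(benign_preds_all, adv_preds, labels):
    # Success rate is only well-defined on examples every victim model
    # classifies correctly when benign; adding a model shrinks this set.
    valid = [i for i, y in enumerate(labels)
             if all(preds[i] == y for preds in benign_preds_all)]
    flips = sum(adv_preds[i] != labels[i] for i in valid)
    return flips / len(valid) if valid else 0.0
```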
> The step size of I-FGSM.
**A:** We follow the setting of using a step size of 1/255 for $\ell_\infty$ attacks, as in many previous papers.
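For reference, one $\ell_\infty$ I-FGSM update with this step size could be sketched as follows (plain Python on a flat list of pixel values in [0, 1]; a generic illustration of the attack step, not TA-Bench's implementation):

```python
def ifgsm_step(x, grad, x0, alpha=1 / 255, eps=4 / 255):
    """One l_inf I-FGSM update: take a step of size alpha along sign(grad),
    project back into the eps-ball around the clean input x0, and clip
    to the valid pixel range [0, 1]."""
    sign = lambda v: (v > 0) - (v < 0)
    out = []
    for xi, gi, ci in zip(x, grad, x0):
        xi = xi + alpha * sign(gi)
        xi = max(ci - eps, min(ci + eps, xi))  # eps-ball projection
        out.append(max(0.0, min(1.0, xi)))     # pixel-range clipping
    return out
```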
> Will you publish the code to sample the 5K subset of examples?
**A:** Certainly, we will publish the code, and we will further provide the same 5K examples to the public for future evaluations.
> Which number of random restarts did you use for PGD?
**A:** We didn't perform random restarts for PGD. Performing $k$ random restarts (with a fixed total number of iterations) requires reducing the number of iterations within each run by a factor of $k$, and we observed no performance gain in such a setting.
> Line 324 states that transformers are better surrogate architecture than CNN. Is it true for all types of targets?
**A:** From Table 4 in our supplementary material, we can see that ViT-B shows better cross-architecture adversarial transferability in the sense of worst-case attack performance and average performance (over 5 CNNs, 4 transformers, and an MLP). Yet, in the sense of the best-case performance, ResNet-50 seems better.
> Can you describe more precisely the training differences, briefly mentioned line 332?
**A:** ViT-B used the private JFT dataset for pre-training, and it was trained on ImageNet after pre-training. DeiT-B was trained on ImageNet using a variety of heavy data augmentation and regularizations. BEiT-B was pre-trained using the unsupervised masked image modeling technique and subsequently fine-tuned to be a classification model via supervised learning.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. I have to admit that it gave me a mixed impression: some points are left unanswered, and for others I did not find the answer satisfactory.
## Points left unanswered
- I do not know how the 30+ techniques were selected. The methodology is missing.
- I understand that the authors were unable to share the code base during the rebuttal. But currently, I cannot evaluate the claims of the paper about the code. Given that the main contribution of the paper is (as pointed out by other reviewers) the software engineering work, I feel that I cannot properly evaluate this work as a whole.
- Some limitations regarding the hyperparameter tuning are unaddressed. Cf. below.
## Points not addressed satisfactorily
- The explanations about the impossibility of the level of granularity are far from convincing, since the paper altered the taxonomy designed by [59], and [59] has a more granular level. The paper's taxonomy compares techniques that have different objectives (optimization and input augmentation, for example, or LGV/MoreBayesian and RFA) on the same basis (cf. my review). Directly re-using the taxonomy of [59] would strengthen the paper.
- It is not enough to simply have non-overlapping sets of examples to tune hyperparameters. The sets of target models used to tune HPs should be distinct from the set of target models used to report the final results (cf. my review). Otherwise, the results overfit to some specific targets. This situation of data leakage corresponds to a threat model with query access to the target model. Since the paper positions itself as a more realistic evaluation of transfer-based black-box attacks, the paper cannot be accepted with such a flaw.
- The rebuttal did not list specifically the techniques for which hyperparameters were selected through cross-validation, in the current version of the paper.
- In Table 1, most columns are empty for substitute model training. If tuning HPs is too computationally costly for LGV and MoreBayesian, simply report the results with the original HPs and a star indicating this. I believe that only the most important HPs could be tuned (for example, I doubt that tuning the weight decay is relevant for LGV).
- If no random restart is performed (i.e. a single start from the original image), then the attack must not be called PGD, but I-FGSM (or equivalently BIM).
- A fixed step size of I-FGSM independent of the perturbation norm is highly unlikely to be optimal. Please take into consideration my original remark.
- More generally, I disagree that a benchmark should stay close to the experimental settings of current work, despite their addressable flaws and limitations. The goal of a good benchmark is to set a high standard of evaluation for past and future work.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer (part 1/5)
Comment: We are sorry that our initial response in the rebuttal failed to convince the reviewer on some points. Since the rebuttal letter is limited to 6000 characters, we had to compress our response considerably, and thus some explanations may have seemed obscure or incomplete. We would like to thank the reviewer for pointing out the comments that require further clarification and response. Please see our further clarifications and responses below.
> I do not know how the 30+ techniques were selected. The methodology is missing.
**A:** We aim to test all state-of-the-art transfer-based methods for generating adversarial examples (that could compromise a general image classification victim model) published in top-tier ML/CV conferences and journals, including NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, TPAMI, TIP, etc. We are very glad to add the missing methods mentioned by the reviewer. If there exist other recent methods that can be compared, we are also more than glad to incorporate them into the benchmark.
> But currently, I cannot evaluate the claims of the paper about the code.
**A:** We hope to address the reviewers' concerns about the code as much as possible. In our general response, we have demonstrated how to evaluate a new method and how to register a new victim model for evaluation using our codebase. We will do our best to demonstrate any sections of the code that are of specific concern.
> Some limitations regarding the hyperparameter tuning are unaddressed.
> The sets of target models used to tune HPs should be distinct from the set of target models used to report the final results (cf. my review).
**A:** Thanks for further clarifying your concern. We agree that it is important to report the final performance on some victim models which are distinct from those used on the validation set. We collected 15 additional victim models, including a BEiT-L, an EfficientNet-L2, a DeiT-L, a ConvNeXt V2-L, a Swin V2-L, a ViT-L, a CAFormer-B36, a MaxViT-L, an EVA-L, an EVA02-L, a MobileNet V2, a DenseNet-161, a ResNeXt-101, a SENet-154, and a RepVGG-B3, and conducted an experiment on attacking these victim models. For the "substitute model training" methods, the conclusion remains consistent with the observations from Table 1. Specifically, when employing I-FGSM as the back-end, RFA achieves the best AA (i.e., 63.98%), and when applying UN-DP-DI$^2$-TI-PI-FGSM as the back-end, MoreBayesian attains the best AA (i.e., 47.13%).
For the "gradient computation" methods, we show the results in the tables below (in parts 2/5 and 3/5). When I-FGSM is applied as the optimization back-end, we observe that the conclusion aligns with the findings in Table 1 of the paper. NAA consistently outperforms the other methods on most choices of the substitute model, achieving the lowest AAA (i.e., 79.27%). When introducing UN-DP-DI$^2$-TI-PI-FGSM as the optimization back-end, the three lowest AAs are achieved using ConvNeXt-B, DeiT-B, and Swin-B as the substitute models, as in Table 1. The best AA is obtained by performing LinBP on the ConvNeXt-B substitute model (i.e., 30.81%, which stands as the second best in Table 1 and is only 0.20% higher than the best AA there), due to a slight distribution shift of the tested victim models. It is also noteworthy that, for each substitute model, the attack method yielding the lowest AA almost always remains consistent between the tables below and Table 1.
We would like to further explain that we focused on attacking the same victim models as those in the validation set mainly to explore the optimal performance of each method and to gain insights into their optimal performance when different substitute/victim models are presented. To be more specific, considering that we tested with different pairs of substitute and victim models, the choice of validation victim models (if they are different from the test victim models) would largely affect the conclusions that could be drawn. For example, if we were able to employ a ViT-S in the set of validation victim models, then the final performance of attacking ViT-B in practice using any substitute model would likely be better. This would make it difficult to obtain any insights regarding which substitute model should be chosen to gain higher attack success rates in practice, and it would cast doubt on whether transferring from vision transformers to convolutional networks is really easier than in the opposite direction. We would like to add the experimental results above to our paper and highlight these points to avoid misleading conclusions.
---
Reply to Comment 1.1.2:
Title: Response to the reviewer (part 2/5)
Comment: | | ResNet-50 | VGG-19 | Inception v3 | EfficientNet v2 | ConvNeXt-B | ViT-B | DeiT-B | BEiT-B | Swin-B | Mixer-B | AAA |
|-------------------------------|:----------:|:----------:|:------------:|:---------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| - Baseline | | | | | | | | | | | |
| I-FGSM | 89.95% | 91.13% | 95.21% | 96.37% | 89.53% | 93.20% | 93.73% | 92.68% | 95.88% | 95.88% | 93.36% |
| - Gradient Computation | | | | | | | | | | | |
| TAP (2018) | 84.03% | 89.09% | 93.36% | 95.72% | 92.93% | 94.66% | 95.38% | 94.66% | 96.54% | 95.73% | 93.21% |
| NRDM (2018) | 83.39% | 85.61% | 88.15% | 97.59% | 96.35% | 96.55% | 96.86% | 96.77% | 96.77% | 93.63% | 93.17% |
| FDA (2019) | 86.43% | 93.09% | 92.23% | 98.69% | 97.36% | 97.51% | 96.94% | 97.69% | 98.09% | 97.93% | 95.60% |
| ILA (2019) | 77.71% | 76.04% | 86.88% | 91.58% | 87.87% | 83.89% | 87.14% | 83.68% | 91.37% | 90.37% | 85.65% |
| SGM (2020) | 76.87% | - | - | 85.64% | 80.16% | 90.84% | 92.02% | 89.59% | 93.42% | 93.90% | - |
| ILA++ (2020) | 75.47% | 73.55% | 91.85% | 89.83% | 86.53% | 81.79% | 86.69% | 82.42% | 90.04% | 88.98% | 84.71% |
| LinBP (2020) | 78.77% | 85.98% | 94.84% | 98.02% | 91.95% | 94.27% | 94.39% | 94.99% | 96.69% | 97.14% | 92.70% |
| ConBP (2021) | 76.61% | 84.77% | - | - | - | - | - | - | - | - | - |
| SE (2021) | - | - | - | - | - | 93.74% | 93.35% | 93.24% | - | 94.96% | - |
| FIA (2021) | **74.01%** | **72.69%** | 87.48% | 90.45% | 83.36% | 81.52% | 84.98% | 85.15% | 89.70% | 86.07% | 83.54% |
| PNA (2022) | - | - | - | - | - | 92.25% | 92.03% | 91.94% | 95.27% | - | - |
| NAA (2022) | 74.50% | 77.62% | **81.59%** | **86.86%** | **73.31%** | **76.57%** | **78.87%** | **74.60%** | **84.58%** | **84.21%** | **79.27%** |
---
Reply to Comment 1.1.3:
Title: Response to the reviewer (part 3/5)
Comment: | | ResNet-50 | VGG-19 | Inception v3 | EfficientNet v2 | ConvNeXt-B | ViT-B | DeiT-B | BEiT-B | Swin-B | Mixer-B | AAA |
|--------------------------|:----------:|:----------:|:------------:|:---------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| - Baseline | | | | | | | | | | | |
| UN-DP-DI$^2$-TI-PI-FGSM | 52.38% | 57.09% | 69.70% | 55.51% | 42.48% | 42.13% | 48.85% | 42.27% | 48.68% | 67.43% | 52.65% |
| - Gradient Computation | | | | | | | | | | | |
| TAP (2018) | 71.84% | 59.21% | 75.30% | 71.15% | 38.75% | 56.97% | 63.83% | 47.73% | 62.06% | 74.42% | 62.13% |
| NRDM (2018) | 60.78% | 63.85% | 77.14% | 63.83% | 51.53% | 64.71% | 74.27% | 61.71% | 82.46% | 76.11% | 67.64% |
| FDA (2019) | 54.96% | 55.67% | 69.87% | 92.98% | 77.22% | 96.78% | 90.49% | 86.03% | 88.45% | 96.69% | 80.91% |
| ILA (2019) | 53.28% | 51.67% | 66.62% | 49.89% | 41.39% | 41.04% | 50.83% | 40.47% | 63.99% | 67.31% | **52.65%** |
| SGM (2020) | **49.87%** | - | - | 51.61% | 33.54% | **39.80%** | 40.91% | **37.29%** | **31.88%** | 61.79% | - |
| ILA++ (2020) | 53.00% | **51.24%** | **66.27%** | **50.34%** | 41.70% | 41.19% | 51.55% | 40.41% | 66.93% | 66.97% | 52.96% |
| LinBP (2020) | 52.97% | 56.00% | 85.62% | 93.87% | **30.81%** | 53.00% | 49.74% | 54.79% | 74.03% | 89.76% | 64.06% |
| ConBP (2021) | 51.69% | 56.04% | - | - | - | - | - | - | - | - | - |
| SE (2021) | - | - | - | - | - | 49.70% | **36.42%** | 38.79% | - | **61.26%** | - |
| FIA (2021) | 53.63% | 59.96% | 69.64% | 79.53% | 57.92% | 51.26% | 53.32% | 64.99% | 64.75% | 74.17% | 62.92% |
| PNA (2022) | - | - | - | - | - | 44.36% | 37.73% | 41.19% | 36.01% | - | - |
| NAA (2022) | 53.93% | 57.44% | 67.18% | 57.69% | 41.89% | 43.77% | 46.53% | 49.65% | 53.72% | 63.04% | 53.48% |
---
Reply to Comment 1.1.4:
Title: Response to the reviewer (part 4/5)
Comment: > The explanations about the impossibility of the level of granularity are far from convincing, since the paper altered the taxonomy designed by [59], and [59] has a more granular level. The paper's taxonomy compares techniques that have different objectives (optimization and input augmentation, for example, or LGV/MoreBayesian and RFA) on the same basis (cf. my review). Directly re-using the taxonomy of [59] would strengthen the paper.
**A:** We appreciate your further comment, but we respectfully insist on the taxonomy of our paper.
In our paper, LGV and MoreBayesian are categorized into "substitute model training", together with RFA, since they all propose principled substitute model training/fine-tuning strategies and **require the attacker to own the training data** (which is very different from the other methods). By contrast, RFA is categorized, together with LinBP and SGM, into the "surrogate refinement" methods in the independent work of [59]. We agree that, from a certain perspective, RFA, LGV, and MoreBayesian can all be said to modify the substitute/surrogate models, but we believe that the important difference between their threat models (regarding access to training resources and training data) should not be ignored in the taxonomy. Additionally, we deem that separating these methods from the "surrogate refinement" methods also provides a more granular taxonomy for them.
Methods in the categories of input augmentation and the optimizer share many commonalities, making it reasonable to study them together. To be more specific, they are all inspired by model training techniques that prevent models from getting stuck in local minima, they are not specific to substitute model architectures, and they do not require training data, unlike the other methods. In fact, rather than comparing the input augmentation methods and the optimizers with each other in isolation, we combined them to test their effectiveness, as in [43, 7], considering that it is natural to adopt a combination of these methods (just as in training DNN models). We believe this is a more reasonable way of testing methods in this category, and we would like to advocate it to future work in the adversarial machine learning community.
> The rebuttal did not list specifically the techniques for which hyperparameters were selected through cross-validation, in the current version of the paper.
**A:** We tuned architecture-related hyper-parameters for transfer-based attacks, including the choice of position for NRDM, ILA, ILA++, LinBP, ConBP, FIA, and NAA, and the scaling factor for SGM, since we tested with a variety of substitute architectures and the hyper-parameters suggested in these papers may not be suitable for a substitute architecture different from the ones tested there. We compared the performance of all possible values of these hyper-parameters and chose the best ones on the validation set. We will also include in the paper the specific values of these hyper-parameters for each substitute model.
> In Table 1, most columns are empty for substitute model training. If tuning HPs is too computationally costly for LGV and MoreBayesian, simply report the results with the original HPs and a star indicating this. I believe that only the most important HPs could be tuned (for example, I doubt that tuning the weight decay is relevant for LGV).
**A:** We appreciate the suggestion about tuning only the most important hyper-parameters. We are glad to conduct such an experiment. The experiment is ongoing and the results will be added to the revised paper. Note that, as reported by previous work, the training of some tested models requires substantial effort to reach convergence, especially with a modified training objective.
---
Reply to Comment 1.1.5:
Title: Response to the reviewer (part 5/5)
Comment: > If no random restart is performed (i.e. a single start from the original image), then the attack must not be called PGD, but I-FGSM (or equivalently BIM).
**A:** The main design difference between PGD and I-FGSM is that PGD initializes the perturbation with a random tensor sampled within the perturbation budget, while I-FGSM initializes the perturbation with a zero tensor, as discussed in lines 101-103 of the paper. We would like to highlight in the paper that, without multiple restarts, the difference between these two methods lies only in the initialization, which is why the performance of I-FGSM and PGD is similar.
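To make this concrete, here is a minimal NumPy sketch (our own illustration, not code from the benchmark; the function name and the `grad_fn` gradient-oracle callback are ours) showing that, with a single start, I-FGSM and PGD share the same update loop and differ only in how the perturbation is initialized:

```python
import numpy as np

def iterative_fgsm(x, grad_fn, eps=8/255, alpha=1/255, steps=100,
                   random_init=False, rng=None):
    """L_inf iterative attack sketch.

    random_init=False: I-FGSM (perturbation initialized to zero).
    random_init=True:  single-restart PGD (uniform random start in the eps-ball).
    With one restart, this flag is the only difference between the two attacks.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    delta = (rng.uniform(-eps, eps, size=x.shape) if random_init
             else np.zeros_like(x))
    for _ in range(steps):
        # grad_fn returns the gradient of the loss w.r.t. the current input
        g = grad_fn(np.clip(x + delta, 0.0, 1.0))
        # ascent step followed by projection back onto the eps-ball
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)
```

With a constant gradient, the zero-initialized variant walks deterministically to the boundary of the budget, while the random-start variant ends inside the same ball.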
> A fixed step size of I-FGSM independent of the perturbation norm is highly unlikely to be optimal. Please take into consideration my original remark.
**A:** We really appreciate the suggestion regarding the exploration of more advanced optimizers with adaptive step sizes. Although we followed the settings of previous work in the experiments with I-FGSM, we are open to adding additional evaluations as per your suggestion.
> More generally, I disagree that a benchmark should stay close to the experimental settings of current work, despite their addressable flaws and limitations. The goal of a good benchmark is to set a high standard of evaluation for past and future work.
**A:** We fully agree that a high standard of evaluation should be expected. Previous work may have unaddressed flaws and limitations, as you have pointed out, and we have actually tried to demonstrate and address some of the most critical ones (from our perspective) in this paper, e.g., the pre-processing operations discussed in lines 194-215, the optimization back-end discussed in lines 277-282, and the substitute architectures. In the meanwhile, our experiments still inherit some settings from previous work, and we would like to explain that this is mainly designed to ablate other affecting factors when demonstrating those most critical points in the first place. We aim to keep addressing more problems and delivering more messages about the standard of evaluation. Some initial results have been given in the rebuttal and will be added to our paper, e.g., the results on defensive victim models and the results on victim models different from those that have been evaluated on the validation set.
---
Rebuttal 1:
Rebuttal: We would like to thank all reviewers for the valuable feedback. Our responses to some common questions are given as follows.
> The codebase.
**A:** As promised in the paper, the codebase will be made publicly available. The codebase directly provides APIs for evaluating attacks using the substitute/victim models in the paper. Users can also register their own substitute models. Unfortunately, uploading supplementary material during the rebuttal period is not allowed (links to external pages are also not permitted), so we provide some code snippets of our TA-Bench here. For instance, evaluating the transferability of adversarial examples crafted by any new method:
```python
import tabench

# Evaluate adversarial examples stored on disk against the standard set of victim models
evaluator = tabench.Evaluation(data_dir="/path/to/adv/examples", mode="standard")
evaluator.evaluate()
```
If one aims to register a new model as a victim model, the implementation can simply be formatted as follows.
```python
import tabench

# For victim models that exist in timm
victims = ["deit3_huge_patch14_224", "vit_small_patch16_224"]
evaluator = tabench.Evaluation(data_dir="/path/to/adv/examples", mode="custom-timm", victims=victims)
evaluator.evaluate()

# For victim models that do NOT exist in timm
victims = [{"model_name": "your_preferred_model", "model": model, "preprocessing": transforms}]
evaluator = tabench.Evaluation(data_dir="/path/to/adv/examples", mode="custom-custom", victims=victims)
evaluator.evaluate()
```
> A perfect fit for the B&D track.
**A:** We appreciate the recognition of the value of this work. Our work develops a new combination of input augmentation and optimizer techniques, which outperforms all other sorts of methods and can be considered a strong optimization back-end for developing new transfer-based attacks. Moreover, as recognized by most reviewers, our paper delivers many novel results and insights, including that 1) it is easier to transfer from vision transformers to convolutional networks than in the opposite direction, and 2) it is essential to evaluate on a variety of substitute and victim models to gain a comprehensive understanding of the performance of a transfer-based method, _etc_. These insights are also expected to contribute to the development of future work in the field of adversarial machine learning and inspire effective methods for generating adversarial examples to evaluate the robustness of DNNs. Thus, we consider it suitable not only for the benchmark and datasets track but also for the audience of the main track.
> Taxonomy and categories of methods.
**A:** We categorize transfer-based attacks based on their commonalities.
* **Input augmentation and the optimizer.** Inspired by the evaluation of some newly developed computer vision architectures (as discussed in lines 217-226), we put "input augmentation" and "optimizer" methods together to evaluate how empirical evaluation of the other sorts of methods can be biased with less optimal augmentation and optimizers. These methods are generally not specific to substitute model architectures and also do not require training data, unlike the other methods.
* **Gradient computation.** The methods in this category all attempt to improve the transferability of adversarial examples by modifying the loss or the backpropagation process.
* **Substitute model training.** The methods in this category all developed principled substitute model training/fine-tuning strategies.
* **Generative Modeling.** These methods advocate training a generative model first and then generating adversarial examples.
We feel that it is challenging to further divide these methods into more granular categories. Most of the "feature attack" methods (suggested by *Reviewer Yj8S*) can also be considered as discarding the higher layers of the substitute model (_e.g._, NRDM, ILA, and ILA++), and can thus also be related to the "network structure" category (suggested by *Reviewer Yj8S*). Methods like LinBP not only modify the architectures of the substitute models but also operate only after a middle layer, which can be seen as modifying the gradient with respect to the "features" and is thus also related to "feature attack". We have talked to the authors of some of these works about the taxonomy and found it difficult to divide the methods into granular categories.
> Defense methods and targeted attacks.
**A:** Although it can be insightful to include defensive models in the benchmark, considering that in our experiments each victim model is also tested as a substitute model (the necessity of such a setting is discussed in Section 4.3), the computational complexity scales quadratically with the number of models, and thus we only include some models that are popular in practice in the first version. After the submission, we tried evaluating the performance of different methods in attacking 3 defensive models obtained via adversarial training, _i.e._, a robust ConvNeXt-B, a robust Swin-B, and a robust ViT-B-CvSt. They are all collected from RobustBench [1] and exhibit high robust accuracy against AutoAttack. The results are given in the attached PDF and will be added to our paper.
As for targeted attacks, unfortunately, **most existing transfer-based attacks** were still developed in the untargeted setting. It is nontrivial to adapt them to the targeted setting and ensure their optimality. Thus, in order to compare these methods fairly in the first place, we think it is more reasonable to stick with the same untargeted setting in the first version of our benchmark. Also, as mentioned by *Reviewer Yj8S*, the targeted attack is more of a future direction, and we will test it once appropriate.
We will discuss these points in the paper following the suggestion from *Reviewer fXNW*.
[1] Francesco Croce et al. RobustBench: a standardized adversarial robustness benchmark. In arXiv 2020.
Pdf: /pdf/35b7825099dca3eb594f013a8a304569de7bf340.pdf
---
Summary: This paper is a benchmark paper on transfer-based attacks in the area of adversarial machine learning. The paper benchmarks 30+ methods on ImageNet, grouped into four principal categories: augmentation and optimizer; gradient computation; substitute model training; and generative models. The extensive evaluation results uncover several interesting novel insights. For instance, employing Vision Transformer (ViT) architectures in the training of substitute models can generally enhance the efficacy of transfer attacks. Additionally, the MoreBayesian method has consistently demonstrated improved transferability of adversarial examples.
Strengths: 1. The evaluation is comprehensive, encompassing over 30 implemented and evaluated methods, thereby offering a thorough comparison of state-of-the-art transfer-based attacks.
2. Additionally, this benchmark investigates the robustness of transfer-based attacks on ViT, an aspect often overlooked in other papers.
3. The evaluation results provide novel insights into the design of more potent transfer-based attacks.
Weaknesses: The paper conducts an extensive and systematic study on transfer-based attacks, comparing state-of-the-art approaches. However, one concern is the absence of significant novel insights. While the paper does offer interesting findings, such as the improved performance of transfer attacks when using ViT as substitute models, it could benefit from additional unique insights to meet the standards expected at NeurIPS.
Furthermore, the paper lacks a discussion on other types of black-box attacks, despite transfer-based attacks being a specific form of such attacks. It would be valuable to include a discussion on related work concerning different black-box attack techniques, highlighting the similarities and differences between various approaches. This would provide a more comprehensive understanding of transfer-based attacks in the broader context of black-box adversarial attacks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thanks for the positive feedback. Our responses to the comments are given as follows.
> One concern is the absence of significant novel insights.
**A:** The novelty of our paper is reflected in several aspects. First, our work develops a new combination of input augmentation and optimizer techniques, which surprisingly outperforms all other sorts of methods and can be considered a strong and useful optimization back-end for developing new transfer-based attacks. Moreover, as recognized by many other reviewers, our paper delivers many novel results and insights, including that 1) it is easier to transfer from vision transformers to convolutional networks than in the opposite direction, and 2) it is essential to evaluate on a variety of substitute and victim models to gain a comprehensive understanding of the performance of a transfer-based method, _etc_. These insights are expected to contribute to the development of future work in the field of trustworthy machine learning and inspire effective adversarial machine learning methods.
> Despite transfer-based attacks being a specific form of such attacks, it would be valuable to include a discussion on related work concerning different black-box attack techniques.
**A:** Thanks for the suggestion. The suggested discussions about attacks other than transfer-based ones will be added to the paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank the authors for the rebuttal. After going through the author's rebuttal and other reviews, some of my concerns are addressed, so I decided to increase my rating to weak accept. The reason that I don’t further increase my rating is that more interesting findings or insights can be expected.
---
Reply to Comment 1.1.1:
Title: Thanks to the reviewer
Comment: Dear Reviewer xpVS,
It's excellent to know that your concerns have been addressed! We value your suggestion and will delve into more interesting findings and insights in future work.
Best regards,
Authors
---
Summary: In this paper, the authors present a new transfer-based attack benchmark (TA-Bench) to evaluate the transferability of adversarial attacks. TA-Bench implements 30+ adversarial attacks with 10 substitute models and introduces more advanced optimization back-ends that incorporate augmentation and different choices of optimizers. The benchmark provides a means to compare different adversarial attacks systematically, fairly, and practically. Evaluation results bring new and interesting insights about existing attacks.
Strengths: 1. This is a solid paper with clear motivation and comprehensive experimental results.
2. The proposed TA-Benchmark provides a systematical, practical and fair way to compare different adversarial attacks.
3. TA-benchmark covers attacks with various mechanisms, including the implementation of many more recent and advanced attacks.
4. The authors bring new insights about existing adversarial attacks such as choices of augmentation and optimizer have impacts on effectiveness and transformers are better substitute models for gradient computation-based attacks.
5. The paper is well written and easy to follow. I enjoyed reading this paper.
Weaknesses: 1. The codebase is not provided (maybe for anonymity). Thus it is hard to evaluate the accessibility, portability, scalability and usability of the benchmark. For example, how difficult would it to evaluate a new attack using TA-bench? Is there any interface for users to quickly implement their own attacks or select their own substitute models?
2. Since the main contribution of this work is providing a fair and practical benchmark to evaluate different attacks. It is worth discussing the standards and metrics for the evaluation. However, I don't find an explicit explanation of such standards and metrics.
3. The adversarial perturbations ($\epsilon$) are set to be fixed in the experiments. However, attackers may use different $\epsilon$ in practice.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see weaknesses for details.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thanks for the positive feedback. Except for the question about codebase which is answered in our general response, all comments are replied to as follows.
> It is worth discussing the standards and metrics for the evaluation. However, I don't find an explicit explanation of such standards and metrics.
**A:** The metrics and standards are introduced in lines 167-176 in our paper. To assess the transferability of adversarial examples, we first evaluate the prediction accuracy of all victim models given adversarial examples generated on a substitute model. Based on the prediction accuracy of each victim model, we then calculate the average accuracy (AA) over all models. Moreover, the benchmark also calculates average AA (AAA), worst AA (WAA), and best AA (BAA) over all choices of substitute models, and these are reported for almost all experiments in our paper. Taking the prediction accuracy instead of the success rate as a base metric makes it possible to add more substitute models in the future, as calculating the success rate in principle requires adversarial examples to be generated from images that are correctly classified by all substitute models.
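To make the aggregation concrete, here is a small sketch with hypothetical accuracy values (not from the paper). Consistent with the rebuttal's usage, we assume lower victim accuracy means a stronger attack, so BAA is the minimum AA over substitute models and WAA the maximum:

```python
import numpy as np

# Hypothetical accuracies: acc[i, j] is the accuracy of victim model j on
# adversarial examples crafted against substitute model i.
acc = np.array([
    [0.10, 0.20, 0.30],
    [0.40, 0.10, 0.20],
])

aa = acc.mean(axis=1)  # average accuracy (AA) per substitute model
aaa = aa.mean()        # average AA (AAA) over all substitute models
waa = aa.max()         # worst AA (WAA): substitute yielding the weakest attack
baa = aa.min()         # best AA (BAA): substitute yielding the strongest attack
```

Because the base metric is accuracy rather than success rate, adding a new substitute model later only appends a row to `acc` without invalidating existing entries.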
> The adversarial perturbations ($\epsilon$) are set to be fixed in the experiments. However, attackers may use different $\epsilon$ in practice.
**A:** We follow the setting of using $\epsilon=8/255$ for $\ell_\infty$ attacks and $\epsilon=5$ for $\ell_2$ attacks, as in previous papers. We will consider performing additional evaluations using other $\epsilon$ values if they are commonly adopted.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. Given the strengths of the paper, I tend to keep my rating. However, I still want to see the performance against different perturbation budgets (e.g., $\epsilon = 16/255$ for $\ell_{\infty}$ and $\epsilon = 2$ for $\ell_2$).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer EHYy,
Thanks for recognizing the strengths of our paper. Experiments on the suggested $\epsilon$ values are currently ongoing. We will provide some preliminary results as soon as they are available, which will likely be in several days.
Best regards,
Authors
---
Reply to Comment 1.1.2:
Comment: Dear Reviewer EHYy,
We have obtained the results under the $\ell_\infty$ constraint with $\epsilon$=16/255, using our new optimization back-end. The conclusions are similar to those from the lower half of Table 1 in our paper. The best attack performance in the sense of BAA is still achieved by applying SE on the DeiT-B substitute model, resulting in the victim models showing an average accuracy of only 4.36%. PNA leads to the lowest WAA among all, which is 10.34%. For the "substitute model training" methods, MoreBayesian still outperforms the other methods, successfully fooling the victim models to show an average accuracy of only 6.59% when using a ResNet-50 substitute model. More detailed results will be added to our paper, and the results for $\ell_2$ attacks with $\epsilon$=2 will also be added.
Best regards,
Authors | Summary: In this paper, the authors present a benchmark for transfer-based attacks, in which they implement 30+ advanced transfer-based attack methods, including those focusing on augmentation and optimizer innovation, “gradient computation” methods, “substitute model training” methods, and those applying generative modeling. By evaluating and comparing transfer-based attacks systematically, they derive several new insights.
Strengths: 1. This paper presents the first transfer-based attack benchmark, which will help people in this field.
2. The authors have implemented 30+ methods, covering augmentation, optimizer, gradient computation, substitute model training, and generative modeling. This variety of methods ensures fairness and comprehensiveness.
3. It is helpful that the results of the experiment show some beneficial conclusions.
Weaknesses: 1. There is only one type of dataset used in the experiments, which makes the results less credible.
2. This paper mainly evaluates transfer-based methods on the basis of two back-ends, and the results are quite different. More explanation is needed.
3. The authors have applied bilinear interpolation and resizing in the pre-processing, which influences the following evaluations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why is the attack performance of ILA, ILA++, and NAA using I-FGSM quite different from that using the new back-end?
2. The analysis of Figure 3 in Section 4.3 is too brief for readers to understand why the most effective factor in these methods may be input augmentation and gradient averaging.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive feedback. Our responses to the comments are given as follows.
> There is only one type of dataset used in the experiment, which makes results lack credibility.
**A:** We focus on ImageNet first for several reasons. First of all, almost all papers studying transfer-based attacks **developed** and **evaluated** their methods on ImageNet. Only a few papers conducted evaluations on smaller-scale datasets, such as CIFAR-10, as well. This is because ImageNet contains large-scale diverse images, making the data distribution more representative. We ensure a fair comparison on ImageNet first to aid understanding of what is effective. Another reason why we focus on ImageNet first is that vision transformers generally require training on ImageNet, and the dataset offers more options for choosing substitute/victim models, enabling a more comprehensive understanding of the performance of an attack, as discussed in Section 4.3. After consolidating all results on ImageNet, we will consider performing evaluations on CIFAR-10, too.
> This paper mainly evaluates transfer-based methods on the basis of two back-ends, and the results are quite different. More explanation is needed. Why is the attack performance of ILA, ILA++, and NAA using I-FGSM quite different from that using the new back-end?
**A:** The advanced optimization back-end (_i.e._, UN-DP-DI$^2$-TI-PI-FGSM) is developed by combining effective methods in the category of "augmentation and the optimizer." Different results achieved using UN-DP-DI$^2$-TI-PI-FGSM, compared to the results on the typical I-FGSM back-end, come from the fact that, in certain methods, the key points for improving the transferability overlap with those in the methods used in UN-DP-DI$^2$-TI-PI-FGSM. For example, NAA achieves the best AAA with the I-FGSM back-end, while it fails to maintain its edge when UN-DP-DI$^2$-TI-PI-FGSM is adopted. A very recent paper [1] has analyzed NAA from the gradient alignment perspective, and it claims that NAA resembles changing the intermediate-level features of the adversarial example into that of some randomly augmented benign examples. This can be similarly achieved by random input transformation methods, leading to less effective performance when combining them together. The same reason may also lead to unsatisfactory performance of some other methods. More analyses will be given in an updated version of the paper.
> The authors have applied bilinear interpolation and resizing in the pre-processing, which influences the following evaluations.
**A:** We would like to stress that the bilinear interpolation and resize operations mentioned in lines 161-164 are only for the ResNet-50 victim, and it is actually the official implementation of the model. We strictly adhere to the official pre-processing pipeline for each model (as mentioned in lines 159-161).
> The analysis of Figure 3 in Section 4.3 is too short to make readers understand why the most effective factor of these methods may be input augmentation and gradient averaging.
**A:** We would like to provide more analyses in the paper. In general, since these methods all apply random augmentation and gradient averaging, and there is no comparison to a baseline that uses the same random augmentation and gradient averaging strategies, we conducted experiments to ensure such a fair comparison (Figure 3). The inferior performance of these methods compared to the baselines indicates that the random augmentation and gradient averaging approach contributes the most.
[1] Qizhang Li, et al. Improving Adversarial Transferability by Intermediate-level Perturbation Decay. In arXiv 2023.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Dear authors,
Thanks for your response. Your response addresses most of my concerns. Thus, I am inclined to give weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks to the reviewer
Comment: We would like to thank the reviewer for responding to our rebuttal. It's great to know that most of your concerns have been addressed! | null | null |
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex | Accept (poster) | Summary: The paper discusses the observation that improved ImageNet classification performance is no longer correlated with neural prediction performance in macaque IT.
To validate this claim, the authors perform experiments where an image was moved across the visual field while the monkey maintained fixation. This process allowed the authors to examine the relative importance of the spatially distributed features in an image.
The authors further compared against a set of architectures from pytorch timm, including those trained on imagenet, self-supervised models, and those trained on internet-scale datasets. Both CNNs and ViT-like models were tested.
The authors find that aligning the gradients of a model with human data improved the prediction performance of DNNs for IT neurons.
In the supplementary, the authors show that their stimuli are better aligned to ImageNet, and thus their results cannot be explained by dataset misalignment.
Strengths: The paper is interesting and timely.
With the widespread availability of datasets with hundreds of millions to billions of images (YFCC, Conceptual Captions, Datacomp-1B, LAION-2B), models have repeatedly achieved higher performance on zero-shot imagenet. Researchers building encoding models have increasingly embraced these models as backbones.
The paper shows that imagenet performance for a variety of different models is no longer strictly correlated with brain predictive performance.
The idea of shifting images while maintaining eye fixation is an interesting way to find biological importance of different features.
Weaknesses: * On DNN fitting
* The paper provides few details on how exactly the images were presented to DNNs. For the monkey presentation, fixation was maintained and the images were shifted. Line 141 onwards provides an explanation; however, it is very unclear how exactly you perform this step.
* On the models used
* I find the division of models into CNN/Transformer/Robust/Self-supervised/DNN extra data to be confusing. For example CLIP models would presumably fall under DNN extra data. However CLIP models have both CNN and ViT variants, how are models classified?
* I suggest the authors use a mutually exclusive division by architecture (CNN/ViT/MLP-mixer etc), loss or dataset (supervised ImageNet, supervised large dataset e.g. CLIP on the billion+ images, self-supervised using patch/rotation/other prefix tasks)
* This applies to Figure 1 and Figure 3, as well as discussion within the text.
* On Neural harmonizer
* The section starting from Line 116 provides very sparse details on how they actually train their neural harmonizer
* What is $g$? Is this the gradient operator? What is the gradient with respect to? Is it to the mean of the neurons in IT?
* Are you performing a second-order gradient optimization?
* On the feature importance figures
* How exactly are you plotting feature importance for DNNs? What approach are you using? GradCAM? In text you mention CRAFT in line 215, however it is not clear if this is the approach you actually use for the visualizations
* Presumably you don't use CRAFT for the neural harmonizer? If so, please clarify what you use for the visualization and the neural harmonizer.
* On imagenet accuracy
* Was imagenet accuracy top-1 accuracy? Was this used dataset ImageNet-1k? ImageNet-21k? ImageNetv2?
* How was accuracy for contrastive models computed? Were they zero-shot probes in the style of "a photo of a x" as in the CLIP paper?
* Minor typos and grammatical errors
* Line 58 -> "tjat"
* Line 119 -> "let $P$ be a function that a multi-scale Gaussian pyramid of a human feature importance map ...", this line is super confusing.
* Supplementary code "realising" -> "releasing"
* On related work
* I recommend the authors discuss at least one of the following papers:
* [1] which show how most DNNs trained on visual-language contrastive losses cannot perform compositional reasoning
* [2] which discusses how recognition and retrieval tasks, and the dataset used to train these tasks lead to compositional features not being emphasized in DNNs
* [3] which discusses how DNN initializations yield representational gaps between vision and concepts
The paper would be strengthened with additional details and clarifications.
[1] Thrush, Tristan, et al. "Winoground: Probing vision and language models for visio-linguistic compositionality." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Yuksekgonul, Mert, et al. "When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It?." The Eleventh International Conference on Learning Representations. 2022.
[3] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weaknesses section for other questions.
For figure 4, did you actually present upside down images to the monkeys? Or were they only used for the training of the neural harmonizer?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors largely address the limitations of their paper. However the authors could potentially emphasize that their approach applies to data collected from electrophysiology from macaque monkeys. Recent work on human fMRI data suggests that contrastive optimized DNNs (CLIP) are better encoding models of the human visual cortex than fully supervised models (imagenet).
[1] Conwell, Colin, et al. "What can 5.17 billion regression fits tell us about artificial models of the human visual system?." SVRHM 2021 Workshop@ NeurIPS. 2021.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper provides few details on how exactly the images were presented to DNNs. For the monkey presentation, fixation was maintained and the images were shifted. Line 141 onwards provides an explanation, however it is very unclear how exactly you perform this step.
We apologize for the lack of clarity. We presented every image shown to a monkey to a DNN and then extracted the feature map at a given layer. Then, a feature map patch was extracted at each of the monkey’s receptive field locations. These feature map patches were used to determine neural predictivity. We will revise our description in the main text with these details. We also included a link to our code with an implementation of this procedure on line 276.
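A minimal sketch of the procedure described above, using entirely hypothetical shapes, receptive-field locations, and a simple ridge fit standing in for whatever regression the linked code actually uses:

```python
import numpy as np

# Hypothetical dimensions: a DNN feature map per image and one (row, col)
# receptive-field center per recorded neural site; none of these values
# come from the paper.
rng = np.random.default_rng(0)
n_images, H, W, C = 20, 14, 14, 8
features = rng.standard_normal((n_images, H, W, C))
rf_locations = [(3, 4), (7, 7), (10, 2)]  # one RF center per neural site
k = 1  # patch half-width

def patch_features(features, row, col, k):
    """Extract a (2k+1) x (2k+1) feature-map patch around an RF center."""
    patch = features[:, row - k:row + k + 1, col - k:col + k + 1, :]
    return patch.reshape(len(features), -1)

# Synthetic neural responses; a real analysis would use recorded data and
# evaluate predictivity on held-out images.
responses = rng.standard_normal((n_images, len(rf_locations)))
lam = 1.0  # ridge regularizer (hypothetical)
for site, (r, c) in enumerate(rf_locations):
    X = patch_features(features, r, c, k)
    y = responses[:, site]
    # Closed-form ridge regression from patch features to responses.
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    pred = X @ w  # predicted response for this site
```

The key step this illustrates is that predictivity is computed from feature-map patches at the monkey's receptive-field locations, not from the whole feature map.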
> I find the division of models into CNN/Transformer/Robust/Self-supervised/DNN extra data to be confusing. For example CLIP models would presumably fall under DNN extra data. However CLIP models have both CNN and ViT variants, how are models classified?
Thank you for the suggestion! We have split the DNN extra data category into CNN extra data and Transformer extra data (Rebuttal Figs A and B). CLIP-ResNet falls into the former group and CLIP-Transformer falls into the latter group. We hesitate to add further divisions as they would clutter the figures, but please take a look at our revised figures and let us know what you think.
> The section starting from Line 116 provide very sparse details for how they actually train their neural harmonizer. What is g? What is the gradient with respect to? Are you performing a second-order gradient optimization?
We apologize for the oversight! Here, g(.) refers to any attribution method (heatmap method), and indeed, we utilize gradients. The loss is first differentiated with respect to the input, resulting in df(x)/dx (yielding a heatmap). Next, we compute the derivative of the difference between this heatmap and the desired heatmap with respect to our model's weights. This involves a mixed partial derivative process. We'd like to clarify that even with ReLU activations, the gradient of the loss is not zero (a trivial case to comprehend is calculating the closed-form of the gradient of the loss for a dense ReLU network). Therefore, the optimization occurs just as with any other training process. We will incorporate this explanation into our revision.
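As a sanity check of this mixed partial derivative, here is a scalar toy example of our own construction (not the paper's implementation; in practice a framework's autodiff performs the double backpropagation). With f(x; w) = w·x², the "heatmap" is g = ∂f/∂x = 2wx, and differentiating the heatmap mismatch (g − h)² with respect to w gives 2(2wx − h)·2x:

```python
import numpy as np

# Toy model: f(x; w) = w * x**2, so the "heatmap" g(x) = df/dx = 2*w*x.
def heatmap(w, x):
    return 2.0 * w * x

def harmonization_loss(w, x, h):
    # Mismatch between the model's heatmap and the desired heatmap h.
    return (heatmap(w, x) - h) ** 2

def grad_loss_wrt_w(w, x, h):
    # Mixed partial: the heatmap itself is a derivative in x, and here we
    # differentiate its mismatch with respect to the weight w.
    return 2.0 * (2.0 * w * x - h) * (2.0 * x)

w, x, h = 0.5, 1.5, 1.0
analytic = grad_loss_wrt_w(w, x, h)  # = 2*(1.5 - 1.0)*3.0 = 3.0

# Verify against a central finite-difference approximation in w.
eps = 1e-6
numeric = (harmonization_loss(w + eps, x, h)
           - harmonization_loss(w - eps, x, h)) / (2 * eps)
```

The nonzero gradient here mirrors the point in the rebuttal: differentiating a quantity that is itself a gradient still yields a usable training signal.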
> How exactly are you plotting feature importance for DNNs? What approach are you using? GradCAM? In text you mention CRAFT in line 215, however it is not clear if this is the approach you actually use for the visualizations
We plotted predicted and actual neural responses at every receptive field location in an image (see Figure 4 caption). These are not attribution maps. We will clarify this in the main text. We also applied CRAFT to harmonized and non-harmonized ResNet50 models. CRAFT provides a complementary insight: what are the actual features (as opposed to an activity heatmap) that each model thinks are driving IT responses? CRAFT shows that harmonized models predict that face features are more important for explaining neural responses, whereas nonharmonized models are less selective for face features.
> Was imagenet accuracy top-1 accuracy?
We plotted DNN top-1 accuracy on ILSVRC12/ImageNet-1k. Some of the models in our zoo were trained on data beyond just ImageNet (CNN/Transformer extra data, original submission Figs. 3/4 and rebuttal figs A/B). We will clarify in the revision.
> How was accuracy for contrastive models computed? Were they zero-shot probes in the style of "a photo of a x" as in the CLIP paper?
As noted in our Methods section, nearly all models in our zoo were taken from TIMM, and the rest are from the Neural Harmonizer. The CLIP models were taken from TIMM, where they were finetuned on ImageNet-1k.
> Minor typos and grammatical errors
Thank you, these were fixed.
>On related work, I recommend the authors discuss at least one of the following papers: [1] which show how most DNNs trained on visual-language contrastive losses cannot perform compositional reasoning, [2] which discusses how recognition and retrieval tasks, and the dataset used to train these tasks lead to compositional features not being emphasized in DNNs, [3] which discusses how DNN initializations yield representational gaps between vision and concepts.
Thank you for these references. We will include them in our discussion.
> For figure 4, did you actually present upside down images to the monkeys?
Yes the monkeys actually saw the images as presented in Figure 4. We will clarify this point in an expanded methods section in the appendix of our submission.
> The authors largely address the limitations of their paper. However the authors could potentially emphasize that their approach applies to data collected from electrophysiology from macaque monkeys. Recent work on human fMRI data suggests that contrastive optimized DNNs (CLIP) are better encoding models of the human visual cortex than fully supervised models (imagenet).
Thank you for this comment. As discussed in our main rebuttal, we will add a discussion of how model-based work on fMRI relates to our findings in our manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response; they have indeed clarified my questions. I have decided to retain my original score. | Summary: This work addresses an important question in the construction of computational models of object recognition: the increase in performance of recent DNN models is no longer accompanied (as it was in the past) by an increase in their ability to predict neural responses. This is a very relevant problem for the advancement of theoretical neuroscience and its medical applications.
The Authors make extensive analysis of this detachment, using the Brain-Score metrics, and investigate two possible sources of this phenomenon, based respectively on the data and the architectures employed.
Crucially, they employed a dataset with realistic stimuli and with spatial information that can be connected with the neural activity.
Finally, the Authors introduce a technique called "neural harmonizer" that allows human and machine responses to be partially aligned, and is suitable for generating interpretable hypotheses about the features that drive neural responses.
Strengths: The paper is exceptionally well written and clear. It addresses a very relevant problem in computational neuroscience and its applications. It provides convincing evidence that the neural harmonizer technique is effective in aligning human and DNN responses, thus providing a basis for important applications in prosthetics and reducing the need for animal experimentation. The originality comes from the combination of a new dataset with spatial information, the use of the neural harmonizer (which has already been used to align DNNs with human perceptual data), and extensive experimentation that strengthens the conclusions.
Weaknesses: I honestly can't find any that has not been already discussed in the Limitations section.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is it possible that, as a speculative question, the mismatch between the different features learned by DNNs and IT can be partially accounted for by the usage of backpropagation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are critically discussed in the dedicated section, and the broader impacts are also addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Is it possible that, as a speculative question, the mismatch between the different features learned by DNNs and IT can be partially accounted for by the usage of backpropagation?
Fantastic question. As we wrote in our discussion, we believe that a wholesale revision of DNN training routines may be necessary for improving predictions of image-evoked neural responses in IT. In our paper we show that aligning DNNs with human behavior can partially achieve this goal, but we believe that great progress will be made in the future by identifying principles that could yield better predictions of neural activity without the need to co-train on human behavior. In our discussion we describe opportunities for designing better datasets and objective functions for doing this, but more biologically-plausible learning algorithms are another approach that we will mention in our revision. | Summary: This work summarizes the trend in DNN models of biological vision that networks that perform better on ImageNet no longer necessarily provide better fits to neural data. It also shows that neural-harmonized models provide better fits to a dataset of mostly face-selective neurons.
Strengths: Neural harmonizing makes a substantial impact on neural fit.
Understanding what makes DNNs stop fitting neural data is an important problem.
The use of more naturalistic color images is well-motivated.
Weaknesses: The framing of the paper (particularly the title) suggests that this paper is making a novel claim about DNN performance, when in fact this claim is based on a re-plot of BrainScore data and has been made before (in ref 11 and here: https://www.biorxiv.org/content/10.1101/688390v1). While it is good to have a replication of this finding, the substantial novel contribution of this paper is to show that neural harmonizing increases the match to neural data, so the title and framing should reflect that.
The measure of relevant features is too anecdotal as presented currently to support the claims (see below)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: line 58 has a typo
The abstract says "Our results suggest that harmonized DNNs break the trade-off between ImageNet accuracy
and neural prediction accuracy that assails current DNNs and offer a path to more accurate models of biological vision. " I'm not sure what tradeoff is being referred to here. Is it that higher accuracy leads to worse neural prediction? That tradeoff is not broken by using the harmonizer. The harmonized models have better predictivity but not better Imagenet performance.
Figure 4 shows feature relevance maps for example images, but without seeing this quantified across a large number of images, not much can be taken from this. The authors also make reference to models paying too much attention to background, etc, but again this is not measured in any objective way and seems to just be based on looking at these examples. Can the authors quantify these claims and show they hold across large populations of images?
In fig 5, why do many of the features have so little relative importance? Are these meant to be the top features? Also, why in the lower plot are there 4 colored bars but only three colored boxes? The axes are also different for these plots. Is there any significance to the fact that the relative importance for features from the harmonized model is half that of the resnet model? What else explains the responses and do those other features look like those from the unharmonized model? In total, I don't feel I can take any strong results away from this plot as-is.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors address limiations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The framing of the paper (particularly the title) suggests that this paper is making a novel claim about DNN performance, when in fact this claim is based on a re-plot of BrainScore data and has been made before (in ref 11 and here: https://www.biorxiv.org/content/10.1101/688390v1). While it is good to have a replication of this finding, the substantial novel contribution of this paper is to show that neural harmonizing increases the match to neural data, so the title and framing should reflect that.
Thank you for the comment. We have addressed this in the main rebuttal. We are copying the response below for your convenience. Please let us know if you have any further questions, concerns, or points for clarification about this.
The correlation between the accuracy of DNNs on object recognition tasks like ImageNet and their ability to predict IT responses has been used as evidence for long-standing theories in vision science, such as core object recognition [1] (our title is an homage to this paper). As we try to emphasize in our submission, this correlation has not only weakened in recent years [2] for nonhuman primate electrophysiology, it has begun to progressively **worsen** as DNNs improve on ImageNet — especially over the last four years as DNNs have begun to dramatically increase in scale. We will clarify that it is this worsening-with-accuracy neural alignment of DNNs that we believe is truly novel (and alarming!). We will adopt the reviewers’ suggestions for clarifying that the results scraped from the Brain-Score website are potentially an “observation by many” (even if it is not yet a published finding) vs. the data we introduce and model is a “finding by the authors”.
The reviewers note that similar issues with performance optimization as we describe have been discussed in the human fMRI literature [3]. We will include a section on human fMRI in our revision as well as an elaboration on the important differences regarding inferences about neural computations based on electrophysiology (as we do) vs. fMRI (which at best offers a very slow and indirect readout of neural populations [4]).
> I'm not sure what tradeoff is being referred to here... That tradeoff is not broken by using the harmonizer. The harmonized models have better predictivity but not better Imagenet performance.
We apologize for the lack of clarity. As non-harmonized models have improved in ImageNet accuracy, they have become progressively worse at predicting neural responses. In contrast, we see a mostly significant linear trend between the ImageNet accuracy and neural prediction accuracy of harmonized DNNs (Monkey 1, PL: $\rho = 0.37$, $p < 0.01$, Monkey 2, PL: $\rho = 0.23$, $p < 0.05$, Monkey 1, ML: $\rho = 0.15$, $n.s.$). Moreover, nearly all harmonized DNNs outperform their nonharmonized baselines on ImageNet accuracy. We will clarify this in the manuscript.
> Figure 4 shows feature relevance maps for example images...
We apologize for the confusion. The results in Figure 3 of our submission depict the average correlation between spatially-resolved predictions of neural activities across images and the ground truth responses. In Figure 4 we plot maps of model predictions and true neural responses for several images. These are not saliency/gradient/feature attribution maps commonly used in explainable AI, but actual predictions and ground truth of the neural activity evoked by different regions of images. Our subjective interpretations of these spatial maps are that non-harmonized DNN responses are driven more by background features than harmonized models. We will clarify.
> Questions about Figure 5.
We used CRAFT [5] to identify the primary features relied on by harmonized/unharmonized ResNet50s to predict IT image-evoked responses. To do this, CRAFT first computes non-negative matrix factorization to find features, and then total Sobol indices to measure the relative importance of each feature. Total Sobol indices provide a score representing the proportion of variance attributed to each individual concept and its interactions. These indices do not always sum to 1 because of how feature interactions are accounted for by total Sobol indices. For example, the importance of Feature 1 includes an interaction between Feature 1 and Feature 2, which is also captured within Feature 2. This leads to redundancy, which explains why the total Sobol indices can range between 0 and 1 without always summing to 1. We will revise our explanation of this procedure for clarity.
We also used CRAFT to find image patches that depict features of varying importance. The bar plot colors in Fig. 4 denote the relative importance of the features depicted by the image patches on the left. (Note that we inadvertently highlighted 4 bars for unharmonized models; this should be 3.) CRAFT thus offers a principled, qualitative explanation of which features drive models' neural activity predictions.
> line 58 has a typo
We fixed this. Thanks!
[1] Yamins, D.L.K., Hong, H., Cadieu, C.F., Solomon, E.A., Seibert, D., DiCarlo, J.J.: Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. U. S. A. 111(23) (June 2014) 8619–8624
[2] Schrimpf, M., Kubilius, J., Hong, H., Majaj, N.J., Rajalingham, R., Issa, E.B., Kar, K., Bashivan, P., Prescott-Roy, J., Geiger, F., Schmidt, K., Yamins, D.L.K., DiCarlo, J.J.: Brain-Score: Which artificial neural network for object recognition is most Brain-Like? (January 2020)
[3] Jozwik, K., Schrimpf, M., Kanwisher, N., Dicarlo, James. 2019. To find better neural network models of human vision, find better neural network models of primate vision. BioRxiv.
[4] Heeger, D., Ress, D. 2002. What does fMRI tell us about neuronal activity? Nature reviews neuroscience.
[5] Fel, T., Picard, A., Bethune, L., Boissin, T., Vigouroux, D., Colin, J., Cadène, R., Serre, T. 2023. CRAFT: Concept recursive activation FacTorization for explainability. CVPR.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and clarifications. They did not address my concern about the interpretation of Figure 3 being based on limited/subjective data. It should also be noted that ref [6] in the general response notes a negative correlation between performance and fit (i.e., the idea that better imagenet models are worse brain models is noted there). I will leave my score as-is.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thanks so much for the response. Sorry also about any confusion. Let us clarify:
> They did not address my concern about the interpretation of Figure 3 being based on limited/subjective data.
In the original response, the reviewer said, **Figure 4 shows feature relevance maps for example images, but without seeing this quantified across a large number of images, not much can be taken from this.** Are you referring to Figure 3 or 4 here?
Figure 3 shows the average + error bars of the correlations between each model's predictions of neuronal activity for images and the actual neuronal responses to those images. As discussed in the text, there are highly statistically significant differences between models. In our response to X5JQ we also noted that the correlations of harmonized model neural predictivity vs. ImageNet accuracy are also significantly positive for 2/3 monkey/area combinations. Thus, the dataset is statistically powered enough for hypothesis testing. Figure 4 just shows what these predictions look like. Can you please clarify what is subjective here, and what is making you hesitate in improving your review?
> It should also be noted that ref [6] in the general response notes a negative correlation between performance and fit...
Ref [6] from your response indeed shows a negative correlation between ImageNet accuracy and "human IT predictivity." But as we mentioned in our response, these results reflect human fMRI data, not electrophysiology data (which our paper features). The authors of that paper also use representational dissimilarity matrices (RDMs) to measure the similarity between models and fMRI activity, whereas we measure the correlation between model-predicted and real image-evoked neuronal activity.
There is no guarantee that results in fMRI will generalize to electrophysiology, and in fact they often do not (see ref [4] from our response). As mentioned in our response, we thought that adding an fMRI section to the discussion of our paper would be a good compromise, to show that while some have found that fMRI fits vs. ImageNet accuracy have decreased over the years (like in ref [6] from your response), others like [1] below have found that this is not always the case when using fMRI and RSA. In contrast, in our manuscript we report that six different electrophysiological recordings from nonhuman primates show the same increase-then-decrease of neuronal prediction accuracy vs. ImageNet accuracy.
[1] Conwell C., Prince J., Kay K., Alvarez G., Konkle T. 2023. What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? BioRxiv.
Thank you again for engaging and we hope to continue this discussion! | Summary: This paper investigates the general finding that modern deep neural network architectures have become worse at predicting primate IT cortex responses, even though they perform better at object recognition. The authors investigate a set of IT recordings that incorporate spatially resolved population maps, and show that the best-fitting units of modern DNNs have different spatial activity than the neurons they are predicting. The authors investigate a set of models that are trained with a “neural harmonizer” that helps align the model responses with human behavior, and find that this training significantly improves the neural predictivity for many different architectures and also results in more similar spatial activation maps.
Edit after rebuttal period: I read the authors’ rebuttal and the resulting discussion about my concerns. Specifically, it was good to see the preliminary results showing that the harmonized models do not impair the predictivity for the public BrainScore IT neural dataset. These results better contextualize the claims by the authors. I would hope that the full analysis can be completed on the rest of the datasets, but even just adding this preliminary data contextualizes their claims, and given this I updated my score.
Strengths: * The authors analyze neural predictivity with a dataset that has not been extensively studied before, and analyze a large number of candidate neural networks varying in architecture and training procedure.
* The authors investigate models trained on behavioral data in addition to ImageNet. Although these models were presented last year at NeurIPS, to my knowledge, their evaluations on neural data had not been reported, and the increase in predictivity with the behavioral data as a regularizer is impressive. I think this makes the contribution novel enough even if the neural harmonizer methodology was presented before, as very few models have explicitly attempted to match precise human behavioral results as a way of increasing neural alignment.
* Even though it is generally known that with modern models improved ImageNet performance does not lead to better brain predictions (further discussed in Weaknesses below), the full investigation and quantification of this in this paper is potentially beneficial to the field as a way to formally state that this trend is no longer the case.
* The analysis of spatially-mapped neural responses is novel, and the breakdown of which parts of the image are most important for explaining the neural data is interesting.
Weaknesses: 1) The authors present the lack of a correlation between neural predictivity and brain responses for modern models as a new finding, however this is a generally known phenomenon mentioned in various papers as motivation for developing better metrics of similarity. Additionally, one can see this is the case by simply looking at the brain-score website. The authors include a footnote that in [11] this was mentioned, but it might be more appropriate to reframe the beginning of the abstract and the introduction so that this is less of a “finding by the authors” and more of an “observation by many in the field”.
2) The authors state that “task optimization” is insufficient for reverse-engineering IT, however the only “task” that they consider is ImageNet. There are many other “tasks” that could be considered to train neural network models that are different from ImageNet (for instance including auxiliary tasks) and may lead to better predictions, so this seems like too strong of a claim.
3) Similarly the claim that DNNs *need* biologically-aligned training routines seems too strong. This is one way of getting to improved predictivity, but it may not be the only way (and in fact, the predictions are still well below the noise ceiling).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: a) Can results from the brain-score benchmarks (Figure 1) be included for the neural harmonizer models that are showed in Figure 3? These datapoints would further support the claims, especially given that the neural data analyzed is primarily from face-patches. If the trend does not hold in those datasets how does this change the conclusions?
b) Is cross validation performed by holding out an entire image or by holding out patches? This potentially matters for interpretation, as some of the image statistics may be very similar for neighboring patches of the image.
c) I’m a little confused by the wording in lines 54-57. Is (i) about the training data and (ii) about the architecture? If so, could this be made explicit?
d) In line 149 it is stated that separate fitting procedures were performed for every layer of activities and the best layer fit is reported – was this best layer determined using a separate “validation” set of data and the reported neural predictivity score is from “test” data? If not, there are some “double dipping” concerns for this type of analysis.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors include sections on limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The authors present the lack of a correlation between neural predictivity and brain responses for modern models as a new finding, however this is a generally known phenomena mentioned in various papers as motivation for developing better metrics of similarity.
Thank you for this comment. We addressed this point in our main rebuttal, but to quickly reiterate: we have taken your comment to heart and will be more precise about the novelty of our contribution, clarifying which findings were scraped from public sources and are potentially an observation by many, vs. truly novel results from our dataset and modeling efforts.
> The authors state that “task optimization” is insufficient for reverse-engineering IT, however the only “task” that they consider is ImageNet.
This is a very important point, thank you! As discussed in the rebuttal, we expanded our analyses to include DNNs trained on other tasks and datasets (Taskonomy and Ecoset). None of these models are as accurate as ImageNet-trained or Harmonized models in predicting image-evoked responses in our recordings. With that said, we agree that there are many other potential tasks and datasets out there, and we will soften our claims as a result. In our revision, we will state that our harmonized DNNs are more accurate in predicting neural responses than any of the 135 ImageNet-trained, 19 Taskonomy-trained, or 4 Ecoset-trained DNNs that we tested.
> Similarly the claim that DNNs need biologically-aligned training routines seems too strong.
We completely agree that biologically-aligned training routines may not be sufficient to reliably predict neural responses. However, our findings indicate that alignment with human behavior is the current best approach to achieving this goal. We will soften our claims to make it clear that harmonization and biologically-aligned training routines are not the only way (and maybe not the best way) to build better models of neural function.
> Was [the] best layer determined using a separate “validation” set of data and the reported neural predictivity score is from “test” data?
Thank you for this question, which we addressed in our main rebuttal, and copy below. Please let us know if you have any further questions or concerns.
Our neural data fitting procedure followed the original Brain-Score [1] procedure precisely, “The final neural predictivity score for the target brain region is computed as the mean across all train-test splits.” In other words, we (and the standard Brain-Score) did not use a held-out validation set to select the most predictive layer for each model. We agree with the reviewer, however, that this approach is potentially flawed. To test for this possibility we redid our analyses with train/val/test dataset partitions and selected a layer for each model as whichever one achieved the best performance on the validation set. For this procedure, we once again held one image out for testing, but in this version of the analysis, we also held out half of the training data for layer selection (validation). Our results from this approach strongly correlated with the ones reported in our original submission, and our overall conclusions remain the same (Monkey 1 ML Rebuttal Fig. A, $\rho = 0.98$, $p < 0.001$, Monkey 2 ML Rebuttal Fig. B, $\rho = 0.97$, $p < 0.001$, Monkey 1 PL, $\rho = 0.98$, $p < 0.001$). In our revision, we will report results from these complete cross-validation analyses instead of the standard Brain-Score analyses.
> Is cross validation performed by holding out an entire image or by holding out patches?
We always tested on held-out images. In our cross-validation procedure described above, our validation set consisted of held-out patches. As this is just for validation, we do not believe it confounds interpretation.
> I’m a little confused by the wording in lines 54-57.
In those lines, (i) is about training data, objective functions, optimizers, learning algorithms (e.g., backpropagation or not?), and other choices that shape training that are unrelated to model architecture. (ii) Is strictly about model architecture, which can be induced with brain-like constraints, like normalizations and recurrence. We will rewrite these lines for clarity.
> Can results from the brain-score benchmarks (Figure 1) be included for the neural harmonizer models that are shown in Figure 3? These data points would further support the claims, especially given that the neural data analyzed is primarily from face-patches. If the trend does not hold in those datasets how does this change the conclusions?
As discussed in the Contributions section of our submission, we focused on IT recordings from [1] because, “[These] experimental images were significantly closer to the statistical distribution of images in ImageNet (Fig. S1.), unlike [IT recordings used in Brain-Score][2].” The publicly available IT data on Brain-Score.org is of responses to images that are out-of-distribution of ImageNet (Fig. 1a and Fig. S1; Fig. 1b and Fig. 1c feature scraped results of DNN scores on private image datasets). Because of the well-known sensitivity of DNNs to distributional shifts [3], we believe it will be difficult to interpret the results of harmonized DNNs on these data. For this reason, we prefer to focus on our dataset from [1] in this manuscript. With that said, we included a link to our codebase on line 276 to make our data and models available to the wider community for additional analyses.
[1] Arcaro, M.J., Ponce, C., Livingstone, M.: The neurons that mistook a hat for a face. Elife (June 2020)
[2] Majaj, N.J., Hong, H., Solomon, E.A., DiCarlo, J.J.: Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance. J. Neurosci. 35(39) (September 2015) 13402–13418
[3] Geirhos, R., Medina Temme, C.R., Rauber, J., Schütt, H.H., Bethge, M., Wichmann, F.A.: Generalisation in humans and deep neural networks.
---
Rebuttal Comment 1.1:
Title: response to author rebuttal
Comment: Thank you for addressing my (and other reviewers) concerns and for making improvements to the paper based on these comments. The update with the choice of layers and the clarification about holding out images solidifies my confidence that the presented results make sense.
However, I don't follow the logic behind why distribution shifts would impact the interpretation of the neural harmonizer models on other datasets. If it is the case that the neural harmonization is dataset specific, then this is a major limitation of the method and should be addressed. For instance, if the result on the BrainScore datasets looks different than the result on the presented dataset, that in itself is interesting and, in my opinion, should be presented for completeness. After all, a good model of the brain should match responses to ALL images, and not just subsets. This seems like something that a lot of readers will be asking given that the paper starts out with an analysis from the BrainScore website (and one could submit the models to BrainScore to get the values for the private data). Primarily due to this, I am maintaining my original score.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We totally agree that measuring out-of-distribution predictions is an interesting problem! We see it as a potentially critical difference between DNN models of visual perception and neural responses and actual biological vision. But we also believe that the first step towards rigorously testing out-of-distribution performance is having a foundation for within-distribution performance. Our datasets fill this surprising gap in the field, as the publicly available IT data on the Brain-Score website are out-of-distribution for ImageNet (Appendix Fig. 1 of our submission).
As a next step and for future work, we believe one or multiple papers can be devoted to analyzing within vs. out-of-distribution affects on neuronal response prediction. Doing this analysis rigorously requires systematic control over how out-of-distribution a neural dataset is vs. ImageNet, and relating that difference to prediction accuracy. Achieving this goal is not possible without gathering new data, but we agree that it is a fascinating direction and will add it to our Discussion as future work. Thanks!! | Rebuttal 1:
Rebuttal: # Response to all reviewers
We thank the reviewers for their extensive feedback. We are confident that we have addressed their main critiques, which we summarize below along with our responses (the relevant reviewers are in parentheses):
**(nyjr) How was the best layer for each model selected?**
> Our neural data fitting procedure followed the original Brain-Score [1] procedure precisely, “The final neural predictivity score for the target brain region is computed as the mean across all train-test splits.” In other words, we (and the standard Brain-Score) did not use a held-out validation set to select the most predictive layer for each model. We agree with the reviewer, however, that this approach is potentially flawed. To test for this possibility we redid our analyses with train/val/test dataset partitions and selected a layer for each model as whichever one achieved the best performance on the validation set. For this procedure, we once again held one image out for testing, but in this version of the analysis, we also held out half of the training data for layer selection (validation). Our results from this approach strongly correlated with the ones reported in our original submission, and our overall conclusions remain the same (Monkey 1 ML Rebuttal Fig. A, $\rho = 0.98$, $p < 0.001$, Monkey 2 ML Rebuttal Fig. B, $\rho = 0.97$, $p < 0.001$, Monkey 1 PL, $\rho = 0.98$, $p < 0.001$). We will report both versions of results in our revision.
**(nyjr, itup) The only task considered is ImageNet. Would other datasets and tasks yield different or better predictions of neural responses?**
> We have added results on the ability of DNNs trained on the Taskonomy dataset and task set [2] and also the Ecoset [3] naturalistic object categorization dataset and task. We took 19 DNNs, each pretrained on a different task in the Taskonomy task set, and 4 DNNs trained for classification on Ecoset, and applied the standard Brain-Score fitting procedure described in our original submission's Methods to derive prediction accuracy scores for each model. These models were far less effective at explaining neural activity than any of the ImageNet-trained models we provided in the main text (Rebuttal tables C and D). **To summarize, our human behavior-aligned “Harmonized” DNNs are more accurate at predicting neural responses than any other DNN dataset or task we tested.**
**(nyjr, X5jQ) How novel is the top-line finding that ImageNet optimization is no longer effective for systems identification of primate IT?**
> The correlation between the accuracy of DNNs on object recognition tasks like ImageNet and their ability to predict IT responses has been used as evidence for long-standing theories in vision science, such as core object recognition [4] (our title is an homage to this paper). As we try to emphasize in our submission, this correlation has not only weakened in recent years [5] for nonhuman primate electrophysiology, it has begun to progressively **worsen** as DNNs improve on ImageNet. We will clarify that it is this worsening-with-accuracy neural alignment of DNNs that we believe is truly novel (and alarming!). We will adopt the reviewers’ suggestions for clarifying that the results scraped from the Brain-Score website are potentially an “observation by many” (even if it is not yet a published finding) vs. the data we introduce and model, which is a “finding by the authors”.
> The reviewers note that similar issues with performance optimization to those we describe have appeared in the human fMRI literature [6, 7], although those papers describe older network architectures from before 2022. We will include a section on human fMRI in our revision, as well as an elaboration on the important differences regarding inferences about neural computations based on electrophysiology (as we do) vs. fMRI (which at best offers a very slow and indirect readout of neural populations [8]).
We have provided detailed comments to each of the reviewers’ responses below. We hope you agree that our manuscript has improved through your feedback and that our findings will have a significant impact on Computational Neuroscience.
[1] Schrimpf, M., Kubilius, J., Hong, H., Majaj, N.J., Rajalingham, R., Issa, E.B., Kar, K., Bashivan, P., Prescott-Roy, J., Geiger, F., Schmidt, K., Yamins, D.L.K., DiCarlo, J.J.: Brain-Score: Which artificial neural network for object recognition is most Brain-Like? (January 2020)
[2] Zamir, A., Sax, A., Shen, W., Guibas, L., Malik, J., Savarese, S. 2018. Taskonomy: Disentangling task transfer learning. IEEE Computer vision and pattern recognition conference (CVPR).
[3] Mehrer, J., Spoerer, C., Jobes, E., Kietzmann, T. 2021. An ecologically motivated image dataset for deep learning yields better models of human vision. Proc. Natl. Acad. Sci. U. S. A. 118(8).
[4] Yamins, D.L.K., Hong, H., Cadieu, C.F., Solomon, E.A., Seibert, D., DiCarlo, J.J.: Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. U. S. A. 111(23) (June 2014) 8619–8624
[5] Schrimpf, M., Kubilius, J., Hong, H., Majaj, N.J., Rajalingham, R., Issa, E.B., Kar, K., Bashivan, P., Prescott-Roy, J., Geiger, F., Schmidt, K., Yamins, D.L.K., DiCarlo, J.J.: Brain-Score: Which artificial neural network for object recognition is most Brain-Like? (January 2020)
[6] Jozwik, K., Schrimpf, M., Kanwisher, N., Dicarlo, James. 2019. To find better neural network models of human vision, find better neural network models of primate vision. BioRxiv.
[7] Conwell, Colin, et al. "What can 5.17 billion regression fits tell us about artificial models of the human visual system?." SVRHM 2021 Workshop@ NeurIPS. 2021.
[8] Heeger, D., Ress, D. 2002. What does fMRI tell us about neuronal activity? Nature reviews neuroscience.
Pdf: /pdf/fcbbcd7bdef390d4cae0942df5ffcac8140265cd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Causal Imitability Under Context-Specific Independence Relations | Accept (poster) | Summary: The paper studies causal imitation learning. In particular, it extends traditional causal graphs with context-specific independence. Although the original causal imitation learning problem can be reduced to a d-separation test, causal imitation learning with context-specific independence is NP-hard. But under other conditions, causal imitation learning with CSI can be solved more efficiently.
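As background for the d-separation reduction mentioned in the summary, d-separation itself is decidable in polynomial time. Below is a minimal, self-contained sketch of the moralization criterion on two toy graphs (illustrative only; not the paper's construction or notation): restrict to the ancestral set, moralize, delete the conditioning set, and check reachability.

```python
from collections import defaultdict, deque

def d_separated(dag, x, y, z):
    """Test x _||_ y | z in a DAG via the moralization criterion.
    dag maps each node to the set of its parents; z is the conditioning set."""
    # 1. restrict to the ancestral set of {x, y} union z
    anc, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(dag.get(n, ()))
    # 2. moralize: undirected parent-child edges, plus "marry" co-parents
    und = defaultdict(set)
    for n in anc:
        ps = [p for p in dag.get(n, ()) if p in anc]
        for p in ps:
            und[n].add(p); und[p].add(n)
        for a in ps:
            for b in ps:
                if a != b:
                    und[a].add(b)
    # 3. delete z; 4. check whether x still reaches y
    seen, q = {x}, deque([x])
    while q:
        for m in und[q.popleft()]:
            if m == y:
                return False
            if m not in seen and m not in z:
                seen.add(m); q.append(m)
    return True

chain = {"X": set(), "M": {"X"}, "Y": {"M"}}          # X -> M -> Y
collider = {"X": set(), "Y": set(), "C": {"X", "Y"}}  # X -> C <- Y
print(d_separated(chain, "X", "Y", {"M"}),      # True: M blocks the chain
      d_separated(collider, "X", "Y", set()),   # True: collider blocks marginally
      d_separated(collider, "X", "Y", {"C"}))   # False: conditioning on C opens it
```

The NP-hardness result in the paper shows that, once CSI relations are added, no analogously cheap graphical test exists in general.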
Strengths: The problem is well motivated and theoretical contributions seem solid.
Weaknesses: 1. Theoretical results rely heavily on (Zhang et al., 2020) and might appear derivative. It would be better if the authors could explain a bit more about the contributions in contrast to existing works, especially in terms of identifiability, because NP-hardness is not surprising.
2. Experiments are relatively weak as they are only tested on synthetic datasets.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
1. Context-specific independence seems like a specific form of structural equations? For instance, in the wage example, CSI can be incorporated into how the function w = f(e, u) is defined. I guess it does not work with Pearl’s do-calculus because of a violation of faithfulness? My point is that it does not seem like causal graphs need refinement to incorporate CSI?
2. In line 134, replacing $f_x$ with a stochastic mapping $\pi$ is mentioned as a soft intervention. But as far as I know, soft interventions do not allow adding edges between the intervened variable and its non-parents?
3. In Definition 3.5, what is the intuition behind deleting the edges incident to W?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and we are delighted to learn that they found our contribution to be solid. Below, the main points and questions raised by the reviewer are addressed.
---
>Experiments are relatively weak as they are only tested on synthetic datasets.
Indeed, our experiments were primarily aimed at illustrating the potential benefits of taking context-specific information (CSIs) into account in the imitation learning process, particularly in terms of 'achievability' of imitation learning. We agree that the current experiments are limited to synthetic datasets, and we acknowledge that testing the proposed algorithms on real-world datasets is essential to validate their practical applicability.
As we move forward, we plan to address this limitation. Developing more efficient algorithms for handling CSIs as well as conducting extensive experiments on real-world datasets to evaluate the performance and efficiency of our approach in practical scenarios will be a part of our future work to enhance the overall usability and scalability of the proposed methods.
---
> Theoretical results rely heavily on (Zhang et.al, 2020) and might appear derivative. It would be better if the authors can explain a bit more on the contributions in contrast to existing works, especially in terms of identifiability, because NP-hardness is not surprising.
While we acknowledge that we drew inspiration from previous work, the introduction of CSIs and their influence on imitability are the key contributions that distinguish our approach and open up new possibilities for imitation learning in complex scenarios. In our experiments, particularly illustrated in Figure 4, we demonstrate how accounting for CSIs can significantly increase the number of instances that become imitable, showcasing the practical benefits of our proposed framework. We will emphasize these aspects in the revised version.
---
>1. Context-specific independence seems like a specific form of structural equations? For instance, in the wage example, CSI can be incorporated into how the function w = f(e, u) is defined. I guess it does not work with Pearl’s do-calculus because of a violation of faithfulness? My point is that it does not seem like causal graphs need refinement to incorporate CSI?
It is absolutely correct that context-specific independence (CSI) is inherent in the functional form of the structural causal model (SCM). However, when it comes to the graphical representation of causal relationships, traditional causal graphs, such as directed acyclic graphs (DAGs), are not equipped to directly incorporate CSIs without additional refinement.
CSI relations imply conditional independence relationships between certain subsets of variables given specific contexts. While this information can be encoded in the functional form of the SCM, it is not explicitly captured in the standard graphical representation. Traditional DAGs represent the causal relationships between variables in a global sense, but they do not distinguish between different contexts in which these relationships might vary.
We took advantage of the concept of labeled DAGs to represent context-specific causal relationships explicitly, making it possible to distinguish between different contexts and their corresponding conditional independencies.
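To make the point above concrete: a CSI relation lives in the functional form of the SCM, so it is invisible to a plain DAG. A minimal simulation of the wage example (the coefficients, context probability, and variable encodings are our illustrative assumptions): education and wage are strongly dependent overall, yet independent in the high-unemployment context.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
educ = rng.integers(0, 4, n)        # toy education levels
unemp = rng.random(n) < 0.3         # True = high-unemployment context
noise = rng.standard_normal(n)

# The CSI is encoded in the structural function itself:
# education has no effect on wage when unemployment is high
wage = np.where(unemp, 10.0 + noise, 10.0 + 2.0 * educ + noise)

r_all = np.corrcoef(educ, wage)[0, 1]
r_high = np.corrcoef(educ[unemp], wage[unemp])[0, 1]
print(round(r_all, 2), round(r_high, 2))  # strong dependence overall, ≈ 0 in-context
```

A single DAG must keep the edge education → wage (it is active in some context), which is exactly why a labeled DAG is needed to record that the edge vanishes when the unemployment context holds.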
---
>2. In line 134, replacing fx with stochastic mapping $\pi$ in mentioned as soft interventions. But as far as I know, soft interventions do not allow adding edges between intervened variable and its non-parent?
Indeed in the literature, a soft intervention is often used to refer specifically to interventions that do not change the parents of a variable. In our work, we used the term "soft intervention" in a broader sense to encompass any form of intervention that may or may not change the set of parents. We will clarify this terminology in the revision to ensure consistency with the existing literature. We thank the reviewer for bringing this to our attention.
>3. In definition 3.5, what’s intuition that the edges incident to W are deleted?
We aimed at constructing a subgraph such that imitability in the original graph would imply imitability in the subgraph (which, in our work is the context-induced subgraph of definition 3.5).
Equivalently, any non-imitable model over the subgraph must imply non-imitability in the original graph.
To have the flexibility in defining a non-imitable model, intuitively speaking, we would like the context variables $W$ to be completely detached from the rest of the model. This allows us to define a structural causal model (SCM) on the subgraph, where the model over $W$ is independent of the model over $V\setminus W$. For further details, please see the proof of Lemma 3.6.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply! I don't have any further questions at this moment. | Summary: This paper studies the problem of causal imitation learning, where the goal is to construct a policy that replicates the outcomes of an expert. The challenge is that the expert may e.g., make decisions based on unobserved variables. Prior work (e.g., [36] as cited in this paper) has established graphical conditions under which imitation can still be achieved. This paper shows that context-specific independence relations (or "CSIs"), if known, can expand the space of scenarios where imitation can be achieved, and give algorithms for performing imitation learning in this setting. In addition, this paper gives an algorithm for potential identification when the graphical criterion fails (see Proposition 4.2, Alg 2, Theorem 4.3), although this algorithm is not necessarily complete, a limitation acknowledged in the conclusion
Strengths: The contribution of this paper is quite clear, technically interesting, and novel. Context-specific independence relations have been studied before in the causal inference literature (see e.g., citation [32]) in the context of causal identification, and so has the problem of causal imitation learning (see e.g., [36]). However, from my reading, this paper does more than simply put these concepts together (CSIs and causal imitation learning) in an obvious way. As it happens, this paper shows that incorporating CSIs renders the problem of imitation learning NP-hard
While there is a substantial amount of technical detail to parse, which perhaps makes the paper a bit difficult to skim, I found the paper to be reasonably clear in the technical presentation given a close read. Likewise, the authors clearly outline some of the limitations of their proposed approach, e.g., the fact that they provide a sound (but not necessarily complete) algorithm in Section 4.
The synthetic experiments are somewhat minimal, but I would not expect extensive experiments in a more theoretically-oriented paper like this one, and I found the setup of the synthetic experiments to be well-motivated, exploring the benefits of their approach over random graphs.
Weaknesses: First, in order to be practically applicable, this method requires some knowledge of context-specific independence relations, where there is a hard independence between certain variables in certain contexts.
However, the motivating examples given in this paper for context-specific independence seem somewhat unrealistic. The examples given in the paper are
* Lines 60-65: No impact of education on wages when unemployment is high
* Lines 66-70: In heavy traffic, no impact of speed limit on driving
* Lines 250-252: Company pricing is independent of demand during a recession
Of these, the second example seems most realistic. In the others, complete independence between variables in those contexts seems unrealistic.
Second, I would not overstate the "straightforward" nature of solving for $\pi^*$ in Section 4 (see e.g., lines 257-259, "solving the aforementioned linear system of equations for $\pi^*$ is straightforward, for it boils down to a matrix inversion"). As mentioned in the footnote, this is only generally true in discrete settings, and even then may not be very practical with large numbers of variables, or variables with large cardinality. Moreover, moving from the discrete to continuous setting introduces some substantial technical difficulties with e.g., ill-posed inverse problems. This aspect is not the main focus of the paper, so I consider it a somewhat minor piece of feedback.
As an additional minor point, there is some lack of clarity in the experiments; e.g., clarifying what is $\pi_{ALG}$ versus $\hat{\pi}_{ALG}$ in Table 1. There are also some minor typos:
* Line 25 "bypass IRL step" -> "bypass the IRL step"
* Line 28 "is for the most part result of" -> "is for the most part the result of"
* Line 75, missing space "For instance,[32]"
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The main weakness of this paper, in my view, is the plausibility of finding context-specific independences in real-world problems.
Are there other motivating examples, beyond those discussed already in the paper, that the authors would consider particularly compelling?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough feedback. We acknowledge their positive feedback regarding the novelty and clarity of our work. We will now turn our attention to the points raised by the reviewer.
---
>The main weakness of this paper, in my view, is the plausibility of finding context-specific independences in real-world problems. Are there other motivating examples, beyond those discussed already in the paper, that the authors would consider particularly compelling?
In addition to the examples discussed in the paper, CSIs have been utilized to analyze, for example, gene expression data [1], proteins [2], dynamics of pneumonia [3], parliament elections, prognosis of heart disease, and occurrence of plants [4]. For a simple instance, consider an antibiotic that normally has a dose–response effect on the number of bacteria. However, due to a genetic mutation, the bacteria become resistant to the antibiotic, resulting in an independent relationship between the dose and the number of bacteria in the context of this mutation. As another example: in general, smoking has a causal effect on blood pressure. Nevertheless, if a person's ratio of beta and alpha lipoproteins exceeds a specific threshold, smoking is unlikely to have any significant impact on their blood pressure [5].
Moreover, we envision that CSI relations can be particularly compelling in fields such as epidemiology, environmental sciences, public policy, and social sciences. For instance, in epidemiology, understanding the context-specific effects of certain risk factors on disease outcomes can be critical for designing targeted interventions and public health policies. In environmental sciences, the interactions between environmental variables and their effects on ecosystems may exhibit context-specific behavior, and CSIs can help unravel these complex relationships.
Furthermore, in social sciences, studying the impact of social interventions or policies on different subgroups of a population may reveal context-specific causal patterns. For example, the effectiveness of a job training program may vary based on the participants' prior work experience, educational background, or age group.
While the plausibility of finding CSIs in real-world scenarios may present challenges, we believe that their existence and relevance in multiple domains make them a compelling concept to explore and incorporate into causal modeling and inference approaches.
[1] Y. Barash and N. Friedman. Context-specific Bayesian clustering for gene expression data. Journal of Computational Biology, 9(2):169–191, 2002.
[2] B. Georgi, J. Schultz, and A. Schliep. Context-specific independence mixture modelling for protein families. In European Conference on Principles of Data Mining and Knowledge Discovery, pages 79–90. Springer, 2007.
[3] S. Visscher, P. Lucas, I. Flesch, and K. Schurink. Using temporal context-specific independence information in the exploratory analysis of disease processes. In Conference on Artificial Intelligence in Medicine in Europe, pages 87–96. Springer, 2007.
[4] H. Nyman, J. Pensar, T. Koski, and J. Corander. Stratified graphical models: context-specific independence in graphical models. Bayesian Analysis, 9(4):883–908, 2014.
[5] D. Edwards and T. Havránek. A fast procedure for model search in multidimensional contingency tables. Biometrika, 72(2):339–351, 1985.
---
> As an additional minor point, there is some lack of clarity in the experiments; E.g., clarifying what is $\pi_{ALG}$ versus $\hat{\pi}_{ALG}$ in Table 1.
We thank the reviewer for bringing this to our attention. We will provide the following clarification in the experiments section: $\pi$ refers to the policy that the algorithm would have learned with an infinite number of samples, while $\hat{\pi}$ is the policy it actually learns with the given finite sample size.
By including this distinction in Table 1, we aimed to show that the sub-optimality of the naive algorithms is not due to the limited sample size but is inherent to the approach itself.
We also thank the reviewer for other minor comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thoughtful response! If space permits, adding more concrete examples like those to the introduction would be helpful in my view. I will maintain my generally positive score (Accept). | Summary: This paper explores the potential benefits of incorporating context-specific independence (CSI) information into causal imitation learning, where CSI relations are known. The authors prove that the decision problem for the feasibility of imitation in this setting is NP-hard, provide a necessary graphical criterion for imitation learning under CSI, and propose an algorithmic approach for causal imitation learning that takes both CSI relations and data into account.
Strengths: **Clarity**
1. This paper is well-written and self-contained. It covers previous literature thoroughly.
2. The introduction section does an excellent job of motivating the research problem, and the problem statement is clear.
**Significance**
1. The research problem is interesting and significant in practice.
**Literature**
1. This paper covers related literature extensively.
**Soundness**
1. All results appear to be sound.
Weaknesses: **Clarity in Assumptions**
1. I believe that the assumption used in Proposition 3.9 requires justification. Without an explanation, it is difficult to assess the generality of the assumption. It would be interesting if the simulation described in Section 5.1 provided the fraction of instances in which the assumption was satisfied.
**Lack of real-world dataset analysis.**
1. I believe the significance of the paper lies in its practical benefits of considering context-specific independence (CSI), which is prevalent in the real world. Therefore, it would be beneficial to have a simulation scenario that incorporates a real-world dataset.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Q1. How strong is the assumption that "the context variables have parents only among the context variables"? It is difficult to discern the insights from which this assumption was generated without reasoning on it.
- Q2. Equation (1) is difficult to parse. What is V’ here? Could you simplify it or provide a verbal explanation?
- Q3. "The labels compatible with w" should be formally defined.
- Q4. Is "G_{w}" defined? Does it refer to the context-induced subgraph of G^{L} with respect to w?
- Q5. What is the computational cost for evaluating pi^{*} in Theorem 3.10? Isn't it still exponential in evaluating the equation?
- Q6. Is the result valid for continuous variables? It seems that the paper assumes discreteness throughout.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: 1. There is a lack of justification for the assumption, making it difficult to assess the clarity and significance of the statement.
2. I believe the significance of the paper lies in its practical benefits of considering context-specific independence (CSI), which is prevalent in the real world. Therefore, it would be beneficial to have a simulation scenario that incorporates a real-world dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable input and appreciate the positive feedback regarding the significance of our work. We address the main points and questions raised by the reviewer below.
---
**Regarding the assumption of Proposition 3.9**
>I believe that the assumption used in Proposition 3.9 requires justification.
>Q1. How strong is the assumption that "the context variables have parents only among the context variables"?
The assumption used in Proposition 3.9 is based on the premise that the context variables are only influenced by other context variables and not by non-context variables. This assumption implies that non-context variables do not have a causal impact on the context variables.
To illustrate this, consider the example discussed in Section 4, where a company aims to maximize revenue. It is reasonable to assume that factors such as pricing policy or demand rate, which are non-context variables, do not influence the context variable $C$ representing the macroeconomic variable of recession.
However, it is important to note that if in some instances, certain context variables are influenced by factors other than $C$, the corresponding CSI relations can be disregarded without compromising the overall approach. In such cases, our approach remains applicable, although with a potential loss of imitability.
In summary, while the assumption of having parents only among the context variables is ideal, our approach can still be adapted and remains valid in cases where this assumption does not strictly hold.
---
> Q2. Equation (1) is difficult to parse. What is $V’$ here? Could you simplify it or provide a verbal explanation?
$C(\mathcal{L})$ in equation (1) is the set of variables, at least one realization of which results in a context-specific independence (or removal of an edge, graphically speaking).
In particular, $\mathbf{V}'$ is an arbitrary subset of nodes containing $V_i$ in Equation (1). The argument is that if there exists some arbitrary subset $\mathbf{V}'$ of nodes containing $V_i$ such that a realization $\ell$ of this subset ($\mathbf{V}'$) results in an independence (e.g., of some $V_j$ and $V_k$), then $V_i$ is considered as a context variable.
We thank the reviewer for this feedback and we will simplify this equation and include further explanation in the revision.
---
>Q3. "The labels compatible with w" should be formally defined.
In our context, labels refer to realizations of a subset of variables. Formally, we say that a label $\ell$ is compatible with a realization $w$ if they are consistent, meaning they have the same value on the intersection of the variables to which they assign a value.
We will add this explanation in the revision.
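As a toy formalization of this notion of compatibility (our own illustration, not from the paper), one can represent labels and realizations as dicts mapping variable names to values and check agreement on the shared variables:

```python
def compatible(label, w):
    """A label is compatible with a realization w if they assign the
    same value to every variable they both mention."""
    return all(w[v] == val for v, val in label.items() if v in w)

# Agree on the shared variable C:
assert compatible({"C": 0, "X": 1}, {"C": 0, "Y": 2})
# Disagree on C:
assert not compatible({"C": 0}, {"C": 1, "X": 0})
```

Note that, under this reading, a label sharing no variables with $w$ is vacuously compatible with it.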
---
> Q4. Is "G_{w}" defined? Does it refer to the context-induced subgraph of G^{L} with respect to w?
Yes, indeed. We thank the reviewer for pointing this out, and we will clarify that this symbol refers to the context-induced subgraph of $\mathcal{G}^\mathcal{L}$ w.r.t. $\mathbf{w}$, as defined in Definition 3.5.
---
>Q5. What is the computational cost for evaluating pi^{*} in Theorem 3.10? Isn't it still exponential in evaluating the equation?
Algorithm 1 performs a number of loop iterations that is linear in the number of contexts. Each iteration requires testing a d-separation, which takes quadratic time in the number of variables in the worst case. The number of possible contexts, however, can still be exponential in the number of variables. This is inevitable given the hardness result of Theorem 3.8.
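To illustrate that the per-context test is cheap, here is a generic moralization-based d-separation check (our own sketch in pure Python, not the paper's implementation); it runs in low-polynomial time in the size of the graph, so the exponential factor comes only from the number of contexts:

```python
from collections import defaultdict

def _ancestral_set(edges, nodes):
    """Return `nodes` together with all of their ancestors in the DAG."""
    parents = defaultdict(set)
    for u, v in edges:
        parents[v].add(u)
    result, stack = set(nodes), list(nodes)
    while stack:
        for p in parents[stack.pop()]:
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def d_separated(edges, xs, ys, zs):
    """Test whether xs and ys are d-separated given zs in the DAG `edges`.

    Standard moralization-based test: restrict to the ancestral set,
    moralize (marry co-parents, drop directions), delete zs, and check
    that no undirected path connects xs to ys.
    """
    keep = _ancestral_set(edges, set(xs) | set(ys) | set(zs))
    sub = [(u, v) for u, v in edges if u in keep and v in keep]
    adj, parents = defaultdict(set), defaultdict(set)
    for u, v in sub:
        adj[u].add(v)
        adj[v].add(u)
        parents[v].add(u)
    for ps in parents.values():          # marry co-parents
        ps = list(ps)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])
    blocked = set(zs)                    # delete the conditioning set
    stack = [x for x in xs if x not in blocked]
    seen = set(stack)
    while stack:
        n = stack.pop()
        if n in ys:
            return False                 # path found: not d-separated
        for m in adj[n]:
            if m not in blocked and m not in seen:
                seen.add(m)
                stack.append(m)
    return True

# X -> Z -> Y: conditioning on the mediator Z separates X from Y.
assert d_separated([("X", "Z"), ("Z", "Y")], {"X"}, {"Y"}, {"Z"})
```

In a context-sensitive run, one would apply this check to each context-induced subgraph $\mathcal{G}_c$ in turn.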
---
>Q6. Is the result valid for continuous variables? It seems that the paper assumes discreteness throughout.
The discreteness assumption can indeed be relaxed under certain considerations.
In particular, Algorithm 1 requires assessing the d-separation of line 5 in the context-induced subgraphs $\mathcal{G}_c$.
Even when the context variables $C$ are continuous, the domain of these variables can be partitioned into at most $2^m$ equivalence classes in terms of their corresponding context-induced subgraph, where $m$ denotes the number of labeled edges.
This holds since the number of context-induced subgraphs cannot exceed $2^m$.
It is noteworthy, however, that solving the equation referred to in line 10 of Algorithm 2 for continuous variables may bring additional computational challenges. We will elaborate on this point in the revision.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: The authors' rebuttal addresses my questions and concerns. I will maintain the positive assessment. | Summary: This paper extends studies on causal imitation learning to settings in which additional information can be provided in the form of context-specific independences (CSIs). Causal imitation learning seeks to maximize some unobserved reward $Y$ by finding a policy $\pi^*$ from the space of policies $\Pi$ such that the reward distribution under that policy, $P(Y \mid do(\pi))$ matches that of the expert policy, $P(Y)$, thereby mimicking the expert. However, mimicking $P(Y)$ is not always possible in settings with unobserved confounding, so additional knowledge in the form of causal constraints is necessary to decide whether $P(Y)$ is imitable. In prior works where such constraints are assumed in the form of a causal diagram $\mathcal{G}$, it has been proven that $P(Y)$ is imitable if and only if there exists a $\pi$-backdoor admissible set $\mathbf{Z}$ w.r.t. $\langle \mathcal{G}, \Pi \rangle$. However, completeness no longer holds when CSIs are provided, in which further independence information about the distributions may be provided, conditioned on specific settings of the variables. The paper first proves that deciding imitability in this more general setting is NP-hard (Thm. 3.8). Given this limitation, they provide a sound algorithm that checks imitability within each context-induced subgraph, which they prove is also complete under a further assumption (Alg. 1). In settings where this assumption does not hold, they show that imitability can still be achieved if a policy on a set of surrogate variables is identifiable under a specific setting of CSIs. This provides a more general algorithm (Alg. 2), evaluated experimentally.
Strengths: To be fully transparent, I have reviewed this paper in the past, so the following list of strengths and weaknesses reflects my existing impressions of this paper, which I have updated following another read of the latest manuscript. However, my points are largely the same, since I do not see many notable changes in the latest version. Please correct me if I am wrong.
**Strengths:**
1. Problem is well-motivated. LDAGs arise in practice and provide more information than standard causal diagrams, which should be leveraged to allow more imitable cases. This would add more positive results to the causal imitation learning literature.
2. Assumptions are clearly stated. LDAGs are well defined, and the paper does a good job of explaining specifically how the additional constraints are incorporated. The authors are very transparent with the limitations of their approach.
3. The solution is nontrivial and interesting. The paper clearly shows non-imitable cases that are rendered imitable when accounting for CSIs. The obvious solution of checking for backdoor sets in each context-induced subgraph is only complete under a strict assumption.
Weaknesses: I reiterate that these points are similar to the points I made in my previous read of the paper. I have an overall positive opinion of the paper, and my goal is to help improve the paper, so I hope the authors will take this feedback into consideration.
**Weaknesses:**
1. There is nothing done to address the NP-hardness claimed by Thm. 3.8. Alg. 1 and 2 still take exponential time in the worst case. It may be insightful to provide some alternative settings in which assumptions are strict enough that polynomial time solutions can be developed. Alternatively, algorithms that sacrifice completeness for speed could be provided. While I understand that this may be out of the scope of the paper, it is otherwise not clear to me why Thm. 3.8 is relevant to the paper in the first place.
2. Unfortunately, even Alg. 2 is not complete in the general case. Completeness is not a requirement for it to serve as a real contribution, but it would help to have some insights in this paper on why Alg. 2 is incomplete (e.g. some counterexamples) and some ideas on what could be done to move towards completeness.
3. The motivation for Alg. 2 could be improved in terms of clarity. Notably, it could be emphasized why Alg. 1 fails in a more general setting. Eq. 3 could be explained better as well to motivate the idea of context-specific surrogates (I did not really understand Eq. 3 until I derived it myself by hand).
4. The experiments illustrate the point as intended, but the tested scenarios are very limited in scope. The first experiment only studies a specific family of graphs, and the second experiment is performed on one specific SCM. Neither of these choices are justified. While a more extensive empirical study would boost the strength of this work, it would help immensely just to be transparent about the data generating process to ensure that there was no cherry picking. For example, for Sec. 5.1, why were those choices of delta and probability of latent variables chosen? And for Sec. 5.2, why were the parameters for that specific SCM chosen (as described in Appendix C)?
5. Many of the ideas in this paper (including the interesting point about surrogates) incrementally improve existing ideas from Zhang et al. (2020). This reduces some of the novelty.
6. This is a minor point and did not affect my judgment of the score, but on line 261, the authors describe the imitability problem under CSIs using the inputs of $\langle \mathcal{G}^{\mathcal{L}}, \Pi, P(\mathbf{O}) \rangle$ as opposed to $\langle \mathcal{G}^{\mathcal{L}}, \Pi \rangle$. I understand that this is due to the CSI setting requiring additional information in the form of constraints over $P(\mathbf{O})$, but I think this could be better framed. Both the original imitability problem and the new version with CSIs uses $P(\mathbf{O})$, since it must be clear that only observational data is available, as opposed to additional interventional data such as $P(\mathbf{O} \mid do(\mathbf{x}))$, collected from experimentation. In addition to this however, the CSI setting should include a set of independences.
Overall, my impression is that this paper is worth publishing, since everything is well-defined, the proofs make sense, the assumptions are clear, and the claims are sufficiently backed. I believe that a score of 6 is appropriate given the level of contribution of the paper, which is solid but somewhat limited.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: No questions, but would be interested in hearing author responses in case I missed something.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable input.
**Regarding NP-hardness and polynomial-time solutions**
We agree that providing settings where a polynomial-time solution exists is insightful, and we thank the reviewer for this suggestion. We have come up with certain restrictions that allow for polynomial-time solutions. One such case is bounded-degree graphs. We will discuss these settings along with their corresponding polynomial-time solutions to provide further insights to the reader.
---
**Regarding Algorithm 2**
We appreciate the reviewer's understanding that completeness is a challenging requirement for the proposed Algorithm 2. While achieving completeness for imitation in this setting poses significant difficulties, we acknowledge the importance of providing insights and counterexamples to aid the readers in understanding the limitations and potential improvements of our work.
In the revised version of the paper, we will include a discussion of counterexamples and scenarios where Algorithm 2 may not produce the optimal solution.
As a concrete example, take for instance the instrumental variable graph ($C\gets X\gets U\to Y$, $X\to Y$), where $U$ is an unobserved confounder, $Y$ is a binary unobservable reward, and $C$ and $X$ are observed binary random variables. Also suppose that the edge $U\to X$ has the label $C=0$. This is similar to our Figure 3 (a) in the text, without the surrogate $S$.
It is evident that the set of equations
$$\left(P(y\vert do(\pi))=\right)P(y\vert X=0, C=0)\pi(X=0)+P(y\vert X=1, C=0)\pi(X=1) = P(y\vert X=0)P(X=0)+P(y\vert X=1)P(X=1) \left(=P(y)\right),$$
taken for both values $y=0$ and $y=1$, is a linear system of two equations in two parameters, namely $\pi(X=0)$ and $\pi(X=1)$, which always has a solution.
This indeed indicates that this instance is imitable, i.e., there always exists at least one imitation policy. However, Algorithm 2 would not be able to find any such policy.
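To make the counterexample concrete, here is a small numerical sketch with made-up distribution values (ours, not from the paper). Since the $y=0$ and $y=1$ equations become linearly dependent once the policy is normalized, we solve one matching equation together with $\pi(X=0)+\pi(X=1)=1$:

```python
import numpy as np

# Hypothetical values (not from the paper) for the counterexample's
# distributions: P(y=1 | X=x, C=0) for x = 0, 1, and the expert's
# observational P(y=1).
p_y1_given_x_c0 = np.array([0.2, 0.8])
p_y1 = 0.5

# Solve  sum_x P(y=1 | X=x, C=0) * pi(x) = P(y=1)
# together with the normalization pi(0) + pi(1) = 1.
A = np.vstack([p_y1_given_x_c0, np.ones(2)])
b = np.array([p_y1, 1.0])
pi = np.linalg.solve(A, b)
# Here pi = [0.5, 0.5]: a valid imitating policy exists for this
# instance, even though Algorithm 2 would not find it.
```

Of course, for other distribution values the linear solution need not lie in $[0,1]$; this snippet only illustrates the particular imitable instance discussed above.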
---
**Regarding the experiments**
Indeed, our experiments were primarily aimed at illustrating the potential benefits of taking context-specific independencies (CSIs) into account in the imitation learning process, particularly in terms of the 'achievability' of imitation learning. We acknowledge that the current experiments are limited to synthetic datasets, and we agree that testing the proposed algorithms on more extensive datasets is essential to validate their practical applicability. As we move forward, we plan to address this limitation. Developing more efficient algorithms for handling CSIs, as well as conducting extensive experiments on real-world datasets to evaluate the performance and efficiency of our approach in practical scenarios, will be part of our future work to enhance the overall usability and scalability of the proposed methods.
We also thank the reviewer for other minor points and comments.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal by Authors
Comment: I have read the rebuttal, and I thank the authors for addressing my concerns. I will raise my score to a 7 under the assumption that the authors will add the promised revisions to the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Nearly Optimal Bounds for Cyclic Forgetting | Accept (poster) | Summary: Authors provide theoretical bounds on the forgetting quantity in the continual learning setting for linear tasks, where each round of learning corresponds to projecting onto a linear subspace.
For a cyclic task ordering on $T$ tasks and an arbitrary iteration $m$, they prove an upper bound of $O(T^3/m)$ on the forgetting. For the proof they use a bound on the numerical range of a product of $T$ projections.
Strengths: - Near-optimal bounds are proven for the worst case under cyclic task ordering (also improving the bounds for less general settings, if I understood the related work section correctly): their upper bound of $O(T^2/m)$, compared to the $\Omega(T/m)$ lower bound and the previously known upper bound of $O(T^2/\sqrt{mT})$.
- Also provides a bound on the numerical range of a product of $T$ projections.
Weaknesses: (Common) notation could be defined earlier, or there could be a reference to the notation section when notation is first used in the introduction.
There is still a gap of $T$, if I understood everything correctly.
As I am unfamiliar with some pieces of related work, and math/other details were not carefully checked, it is hard to rate the novelty.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the difference between the $T^3/m$ bound in the abstract and the $T^2/m$ bound presented in the paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As they mention in the conclusion there are some gaps for real projections and for forgetting, all relevant projections are real projections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Notation: We will clarify the notation in the introduction as suggested.
Gap of $T$: To clarify: The upper bound given in our paper has worse $T$-dependence than the lower bound given in [Evr+22], but it has the same $T$-dependence as their upper bound for any fixed dimension. Our bound of $O(\frac{T^2}{m})$ may appear to have worse $T$-dependence than their dimension-independent bound of $\frac{T^2}{\sqrt{mT}}$ if $m\ll T$, but in this setting both our bounds are worse than the trivial bound of 1.
Real projections: We have since resolved the real case in the following sense: [Evr+22] bounds the forgetting after $m$ cycles of $k$ tasks in terms of $\sup_A\lVert A^m(I-A)\rVert$ where $A$ ranges over all products of $k$ (real) orthogonal projections. Our paper bounds $\sup_A\lVert A^m(I-A)\rVert$ in terms of $\sup_{z\in W(A)}\lvert z^m(1-z)\rvert$ where $A$ ranges over all products of $k$ orthogonal projections and $W(A)$ denotes the numerical range of $A$. Our original paper computed the supremum over all products of complex orthogonal projections, leaving open whether adding the condition that the projections are real improves the bound. We have since shown that real projections can attain the same supremum.
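As a toy numerical illustration of the quantity $\lVert A^m(I-A)\rVert$ (our own example, not from the paper): for a product of two real orthogonal projections in $\mathbb{R}^2$, this norm decays as $m$ grows:

```python
import numpy as np

# Orthogonal projections onto the x-axis and onto the 45-degree line.
P1 = np.array([[1.0, 0.0], [0.0, 0.0]])
c = np.sqrt(0.5)
P2 = np.array([[c * c, c * c], [c * c, c * c]])  # all entries 1/2
A = P2 @ P1  # one "cycle" of the two tasks

def forgetting_norm(m):
    """Spectral norm of A^m (I - A), the quantity bounded above."""
    return np.linalg.norm(np.linalg.matrix_power(A, m) @ (np.eye(2) - A), 2)

norms = [forgetting_norm(m) for m in (1, 5, 10, 20)]
# For this particular A one has A^2 = A / 2, so the norms decay
# geometrically; in general the worst-case decay is only O(k/m).
```

The geometric decay here is an artifact of this easy two-projection example; the worst-case instances behind the $O(T^2/m)$ bound decay far more slowly.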
Abstract typo: The $T^3/m$ in the abstract is a typo and should say $T^2/m$.
---
Rebuttal Comment 1.1:
Comment: To the authors: your response has been read and is being considered. | Summary:
The paper studies the setting of continual learning for linear tasks, and in particular the phenomena of catastrophic forgetting.
They prove the best known upper bound on forgetting for the setting of cyclic tasks.
Strengths: The paper demonstrates an upper bound on Forgetting that is independent of the dimension.
This is a strong improvement for settings with very large dimension spaces.
Weaknesses:
I think the main weakness is the significance of contribution, compared to previous work.
This result build on recent work by [Evron et al.22] and demonstrates improvement in a certain setting (cyclic).
The paper would be strengthened if either the results would be extended to other setting, and incorporated experimental evaluations - both of which similarly to [Evron et al.22].
Additional comments:
1. Not clear why the paper uses the term “near-optimal”:
- The known lower bound from [Evron et al.22] sits at $T^2/m$. The bound given in this work has a worse dependence on $T$ compared to [Evron et al.22]. The improvement comes for large dimensions, which is indeed important, but is it necessarily near-optimal?
2. Forgetting vs. regret: by Evron et al., in the cyclic setting both forgetting and regret go to 0 as $m$ increases. Since regret is a well-studied quantity, it is interesting to compare the two and examine how the convergence rate behaves; a connection between them may allow building on the vast literature on regret.
Minor comments:
- Notation: I assume you mean $X_i \in \mathbb{C}^{r_i \times d}$. Maybe say $r_i$ refers to the number of samples, each with $d$ dimensions.
- Line 92 : “We give nearly a nearly optimal bound”
- Citations do not indicate the names of all authors, incorrect format
- Lines 52-53 - why write $T^2/\sqrt{mT}$ instead of $T/m$?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: none.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Significance: We would be happy to add a remark about other settings and compare our approach to that of [Evron et al.22]. The challenge with experimental results is that our bounds are worst-case bounds. Empirically, nearly-worst-case examples seem to be quite infrequent.
However we could include a simple experiment showing empirical performance on the examples given in [Evron et al.22] that were used to obtain (theoretical) lower bounds on forgetting.
Near-optimal bounds: Please see the response to reviewer kHo7 above titled "Gap in $T$." In addition, we would be happy to modify the title to replace "Nearly Optimal" with "Dimension Independent" or "Improved ... for high dimensions", since we also agree that the improvement comes for high dimensions, which is quite important.
Forgetting vs. Regret: We think that regret is also interesting to study. A worst-case bound on the regret is indeed possible; however, it is fairly simple and does not require our results. To give a quick sketch: In our setting one can bound the regret (see Evron et al. for a definition) by the average of the squared increments $\|w_{k+1} - w_k\|^2.$ By iterating the Pythagorean theorem one sees that $1 = \|w_0 - w_*\|^2 = \|w_k - w_*\|^2 + \|w_k - w_{k-1}\|^2 + \ldots + \|w_1 - w_0\|^2$, which gives a (tight) bound of $1/k$ on the regret. If one defines regret with the non-squared loss, then it turns out that one can give a tight bound of $1/\sqrt{k}$ using a slightly more complicated (but still elementary) argument. (Both bounds are attainable in the cyclic setting, and both bounds also hold without the cyclic assumption.)
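The telescoping step can be written out explicitly (a reconstruction under the setting's normalization $\|w_*\| = 1$ and initialization $w_0 = 0$; each update projects $w_k$ onto an affine subspace containing $w_*$, so the increment is orthogonal to the remaining error):

```latex
\|w_k - w_*\|^2 = \|w_{k+1} - w_*\|^2 + \|w_{k+1} - w_k\|^2
\;\Longrightarrow\;
1 = \|w_0 - w_*\|^2
  = \|w_k - w_*\|^2 + \sum_{i=1}^{k} \|w_i - w_{i-1}\|^2
  \;\ge\; \sum_{i=1}^{k} \|w_i - w_{i-1}\|^2,
```

so the average squared increment over $k$ updates, and hence the regret, is at most $1/k$.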
Taking average regret over the last cycle is more relevant to our results. In this case, with $m$ cycles of $T$ tasks each, the average regret becomes $\frac{\|w_{mT}\|^2-\|w_{m(T-1)}\|^2}{T}=\frac{\Delta_{m-1}(w^*)}{T}$. Combining this with the equation in line 98 of our paper bounds the worst-case forgetting by $\frac{T(T-1)}{2}$ times the worst-case average regret over the last cycle.
Minor comments: We will correct the formatting and notation as suggested. Our original reason for writing $\frac{T^2}{mT}$ was to make it look more similar to the other bounds in that sentence ($\frac{T^2}{\sqrt{mT}}$ and $\frac{T^2d}{mT}$), and thus easier to compare; an earlier version of our paper stated our main bound as $\frac{T^3}{mT}$. Furthermore, $mT$ is the total number of iterations, which may help interpret the bound. However, we ended up stating our main result using $\frac{T^2}{m}$ because, as an independent theorem, the first advantage does not apply.
---
Rebuttal Comment 1.1:
Comment: To the authors: your response has been read and is being considered.
---
Rebuttal Comment 1.2:
Comment: Thank you for your answer, and for the suggested modifications. | Summary: This paper describes bounds for cyclic forgetting when an overparametrized linear model is fit successively to a series of tasks. The exact same setting has been studied fairly recently in [Evr+22], and the main contribution here is to improve the dimension dependence of the bounds.
Strengths: The theory of Theorem 5 is clean and beautiful, and certainly of mathematical interest in its lack of dependence on ambient dimension. The core idea is creative and the writing is good.
Lines 56-62 of the text are valuable for intuition as to why the suboptimal dimension dependence arises, and how to fix it. This is a very interesting insight which could potentially be exploited for more data-instance-dependent bounds in the future.
Weaknesses: Catastrophic forgetting arises especially when there are certain types of dependencies/correlations between the tasks. This entire manuscript is about studying that situation theoretically, but it completely neglects to provide such motivation. The manuscript relies on the publication [Evr+22] to set precedent for the problems studied. However, that paper was pathbreaking in setting up the problem and establishing that cyclic task orderings don't suffer catastrophic forgetting, and included much more interpretation and exposition linking the theory result to the motivation for studying the problem. The current manuscript does not give any of this. Instead, the main result obviates all dependencies on the tasks other than their number, so it does not lend much further insight into the problem.
The proofs are far too long and technical for the main body of the paper - they should be moved to the appendices. The entire discussion leading up to Theorem 5 is full of mathematical beauty arising from symmetry - but none of this is in the service of the task of learning from different tasks, only using the generic fact that the tasks are projections.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Please make a pass for key typos. The abstract says T^3 instead of T^2; lines 52-53 shouldn't have T in the denominator anywhere; and many more.
- Is there scope to extend these results to parametrize dependencies between tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Precedent for Problem: There are indeed motivations for our work discussed in the referenced paper [Evr+22], which we list here and will elaborate on in our introduction.
Many data sets in machine learning are cyclic or periodic in nature, for example, due to the "day of the week effect" in financial data or search engine data. In a manufacturing facility using robots, a machine or robot is typically instructed to repeat a series of tasks for producing certain products. Learning this type of task on arrival can be formulated as cyclic continual learning, which is our main subject of study in terms of catastrophic forgetting.
Additionally the methods of cyclic alternating projections (by Von Neumann and Halperin) and cyclic Kaczmarz methods are well-studied methods for solving linear systems. Our work can be thought of as studying the worst-case forgetting of these popular methods. Equivalently one can think of this as studying residual bounds for (cyclic) Kaczmarz-type algorithms. While very natural, this is somewhat of a new take on analyzing the convergence of these methods. [Evr+22] mentions this connection, but leaves open the problem of obtaining tight convergence bounds. We think this problem is sufficiently natural to justify study.
Finally since high-dimensional data is so ubiquitous in machine learning, we believe that the dimension dependence was a major weakness in the bounds of [Evr+22]. Indeed our bound captures a qualitative phenomenon: the worst-case forgetting need not scale at all with the dimension of the ambient data.
The equations in lines 98 and 99 bound the forgetting of any single sequence of $T$ tasks after $m$ cycles by $(1+\sqrt2)\frac{T-1}{2}\sup_z\lvert (1-z)z^m\rvert$ where $z$ ranges over the numerical range of the product of the projections in that sequence. Our paper bounds this over the class of all sequences of $T$ tasks by characterizing the union of their corresponding numerical ranges, but we do not yet have any simple subclass of sequences of tasks over which the supremum of this value over that class has a better bound. We have later characterized all tasks that attain the maximum possible value in each dimension and shown that none of them can optimize the forgetting.
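As a quick numerical illustration (our own construction, not from the paper or rebuttal; the dimensions, task counts, and sample sizes below are arbitrary), one can approximate the numerical range of a product of random orthogonal projections by sampling unit vectors and evaluate the quantity $\sup_{z}\lvert z^m(1-z)\rvert$ that drives the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, m = 6, 3, 10  # ambient dimension, projections per product, cycles (arbitrary)

def random_projection(d, r, rng):
    """Orthogonal projection onto a random r-dimensional subspace of R^d."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
    return Q @ Q.T

# A: product of T orthogonal projections, as in one cycle of tasks.
A = np.eye(d)
for _ in range(T):
    A = random_projection(d, d - 1, rng) @ A

# Approximate the numerical range W(A) = {x* A x : ||x|| = 1} with random complex unit vectors.
x = rng.standard_normal((2000, d)) + 1j * rng.standard_normal((2000, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)
z = np.einsum("ij,jk,ik->i", x.conj(), A, x)

sup_val = np.abs(z**m * (1 - z)).max()
print(sup_val)  # shrinks as m grows, consistent with forgetting decaying over cycles
```

Since $\lVert A\rVert \le 1$ for a product of projections, every sampled $z$ lies in the unit disk, so $|z^m(1-z)| \le 2$ regardless of the draw.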
Proofs Location: As a characterization of the union of the numerical ranges of the products of $k$ complex (and, after submission, also real) projections (for any fixed dimension) is mathematically interesting in its own right, we feel that some of the proof should perhaps be left in the main body. However, we agree that it would make sense to move more technical portions to the appendix, leaving more room for discussion.
Typos: We will indeed make a thorough pass for typos. Note that lines 52-53 do not contain a typo, see ``Minor comments'' in the Reviewer SAj1 response. We agree that this could be clearer, however.
Task Dependencies: Dependencies between tasks is an interesting setting, and we do believe that this is interesting future work. Since this is a relevant question for a reader, we will add a remark discussing the challenges.
---
Rebuttal Comment 1.1:
Comment: To the authors: your response has been read and is being considered. | Summary: Consider an overparametrized system solving a periodic sequence of tasks in linear (least squares) regression in the following way: starting from a weight vector $w_t$ for task $t$, perform gradient descent to solve task $t+1$, which, due to overparametrization, will eventually return an exact solution $w_{t+1}$ for task $t+1$. Here $w_0 = 0$ and each task is specified by a set of input vectors $X$ and output labels $y$.
The procedure amounts to projecting $w_t$ onto the solution (hyper-)space of task $t+1$ and implies a loss of information about task $t$ and all previous tasks. This raises the question of just how much is forgotten after $n$ tasks have been visited, which can be quantified as the error of $w_n$ averaged over all previous datasets. While this "forgetting" depends on the data, it is still possible to give a worst-case bound for the worst possible periodic task sequence with period $T$.
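As an illustrative simulation of this procedure (our own sketch, not part of the submission; all dimensions and task counts below are arbitrary), one can run the projection updates on a cyclic sequence of consistent linear tasks and watch the forgetting decay:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, cycles = 20, 4, 200  # ambient dimension, tasks per cycle, cycles (arbitrary)

# Overparametrized, consistent tasks: each (X_t, y_t) has fewer rows than d
# and shares a common interpolating solution w_star.
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
tasks = []
for _ in range(T):
    X = rng.standard_normal((3, d))
    tasks.append((X, X @ w_star))

def fit_task(w, X, y):
    """Gradient descent to interpolation = orthogonal projection of w onto {v : Xv = y}."""
    return w + np.linalg.pinv(X) @ (y - X @ w)

def forgetting(w):
    """Average squared error of the current iterate over all T tasks."""
    return float(np.mean([np.linalg.norm(X @ w - y) ** 2 for X, y in tasks]))

w = np.zeros(d)
history = []
for _ in range(cycles):
    for X, y in tasks:
        w = fit_task(w, X, y)
    history.append(forgetting(w))

print(history[0], history[-1])  # forgetting after the first and the last cycle
```

With a random (hence non-worst-case) task sequence, the forgetting after many cycles is far below the worst-case bounds under discussion.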
Previous work gives a bound of $T^2/\sqrt{nT}$ and a dimension-dependent bound, as well as a lower bound of order $T^2/n$. The paper at hand gives order $T^3/n$, which has the optimal dependence on $n$, though not on $T$.
This result is obtained by a reduction of the problem to a bound on the numerical range of a polynomial in $T$ projection operators, which is obtained by an intricate analysis, which (because of time constraints) I didn't completely verify, although I didn't find any flaws.
Strengths: The paper approaches an interesting problem with an equally interesting mathematical technique. The given bound is a relevant improvement and to me seems an important contribution to the subject of continual learning.
The problem is clearly stated, the minor limitations of the main result are nicely exposed already in the title by the word "nearly".
Weaknesses: The paper is technically very heavy in the proofs of Lemma 4 and Theorem 5. If possible the authors might sketch the basic ideas of the proof of Theorem 5 in a short paragraph.
The statement of Lemma 4 is somewhat opaque and its relevance to the problem is not immediately clear. It doesn't help that $\Gamma_k$ shows up in the proof of Lemma 4 (l154), while its definition comes on the next page in the statement of Theorem 5.
The paper leans strongly on the reference [Evr+22], without which it is very difficult to understand. At least Theorem 11 in [Evr+22] could be reproduced.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The various bounds are sometimes expressed in terms of the number of iterations and sometimes in terms of the number of cycles. This may cause some confusion. Would it be possible to unify this?
The $m$ in line 43 and in (1) seems to be a typo.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Technical Proofs: As Reviewer eHkq suggested, we will move part of the proofs to the appendix and include a high-level proof sketch in the main body. We summarize the proof of Theorem 5 as follows: To compute the range of $P$, we show that the outer boundary of the range is the claimed sinusoidal spiral and that the range is simply connected, which together imply that the range is the filled sinusoidal spiral. Once we have the outer boundary, simple connectedness follows from a topological argument given in the supplementary material. To compute the outer boundary, note that $P$ takes a sequence of vectors to the cyclic product of pairwise inner products. If any input vector is not coplanar with its neighbors, then by projecting it onto the plane spanned by its neighbors and rescaling it to unit norm, we can increase the magnitude of the corresponding factors without changing any directions (shown algebraically in the supplementary material). It therefore suffices to consider the case when all vectors are coplanar, and hence to work in $\mathbb{C}^2$. In this case, $P$ is a (real-)smooth map from the (real) manifold $(S^1)^n\subset\mathbb{C}^n$ to the (real) manifold $\mathbb{C}$, so any input that gets sent to a boundary point of the range has a singular Jacobian. That is, the directional derivatives in all directions tangent to the domain must be parallel. These conditions can be manipulated algebraically to characterize all critical points and critical values of $P$, and the computation is made simpler by using quaternions.
Reference to $\Gamma_k$: We will remove the reference to $\Gamma_k$ in the proof of Lemma 4 in the main paper by changing the statement to exclude 0 (where the statement is false, but this doesn't affect the rest of the paper).
Reliance on Reference: We will add a re-statement of Theorem 11 from [Evr+22] and some exposition around this result. In addition, in light of our response to Reviewer eHkq titled ``Precedent for Problem'', we plan to also add more exposition and motivation, alleviating the heavy leaning on the [Evr+22] reference.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification and the proposed modifications. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback. The following are our responses to the questions and concerns raised by reviewers.
We will move the proofs of the main lemmas and theorems (through Theorem 4) to an appendix. This will leave room for additional discussion of the connection to previous forgetting bounds, as well as a less technical exposition of our techniques.
Please see the individual responses for more details. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning | Accept (poster) | Summary: The authors consider a distribution over transition models and tackle the safe RL problem by applying a risk-averse perspective towards model uncertainty through coherent distortion risk measures. The proposed formulation can ease the burden of solving a min-max problem, which is often encountered in many worst-case safe RL algorithms.
The authors also theoretically show that their formulation is equivalent to a specific class of distributionally robust safe RL problems.
Strengths: - This work proposes a new formulation of safe RL, by considering a distribution of transition models and applying the distortion risk measure toward the model uncertainty, which circumvents the burden of solving min-max problems.
- This work theoretically proposes that the reformulated problem is equivalent to a specific class of distributionally robust safe RL problems.
Weaknesses: - It is better to include the proof of Lemma 2 in Appendix B.2 to keep the paper self-contained. It would also be better to explain the results of lines 225-226.
- It seems that the performance of the proposed method cannot surpass existing methods, especially adversarial RL. Also, the experiments presented are not sufficient.
- Although applying the distortion risk measure $\rho$ seems to be promising, I'm wondering about the necessity of doing so, since it introduces more complex computation processes and seems to contribute little to the experimental results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What's the choice of the function $g$ (line 224-225) in your experiments?
- In lines 229-230, is there only one Q function for calculating all sampled transitions?
- In Table 1, what’s the meaning of the bold numbers? I think you should highlight the best results instead of yours.
- Also, please explain the necessity and merits of adopting the formulation proposed in this paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. Please see below for detailed responses to your questions. In particular, we highlight the main benefits of our RAMU framework and the key takeaways of our experimental results. We hope that these responses address your main concerns, and we ask that you please consider updating your review scores to reflect these clarifications.
### [W3, Q4] Main benefits of our RAMU framework
There are several benefits of our proposed RAMU framework compared to existing methods for addressing model uncertainty in deep RL (such as adversarial RL and domain randomization). The main benefits of our RAMU framework include:
1. Our RAMU framework has robustness guarantees (see Theorem 1). This is not true of popular methods such as domain randomization that apply the expectation operator over distributions of transition models.
2. Our RAMU framework achieves this robustness without requiring complex minimax optimization. This is not true of robust RL (and distributionally robust RL) methods, which must solve for worst-case transition models (or distributions over transition models) throughout training. Our approach, on the other hand, only requires weighted sample averages that can be computed very efficiently (see lines 220-235).
3. Our RAMU framework can be implemented using standard data collection from a single training environment (see lines 236-263), without requiring potentially dangerous adversarial interventions (as in adversarial RL) or detailed simulator access (as in domain randomization). Therefore, unlike these existing methods, our approach can be applied in settings that require real-world data collection for training.
### [W2, Q3] Key takeaways from experimental results
* **[W2] Key takeaways:** We believe that our experimental results provide strong support for the benefits of our RAMU framework, and we include thorough comparisons against the most popular methods for addressing model uncertainty in deep RL. The experiments demonstrate the following key points:
1. Our RAMU framework achieves significant robustness and safety benefits compared to standard safe RL, while using exactly the same data collection process from a single training environment.
2. Our use of coherent distortion risk measures leads to robustness and safety benefits compared to a risk-neutral approach based on expectations, as shown by the comparison of RAMU (Wang 0.75) to the special case of RAMU (Expectation). Our risk-averse implementation achieves higher average rewards (1.08 vs. 1.05) and better safety constraint satisfaction (80% vs. 74%).
3. Our RAMU framework achieves similar or improved performance compared to the most popular baselines for incorporating model uncertainty in deep RL (domain randomization and adversarial RL). We accomplish this without requiring additional assumptions on the training process, such as detailed simulator access (as in domain randomization) or potentially dangerous adversarial interventions (as in adversarial RL), that are not always suitable in real-world settings.
* **[W2] Comparison to adversarial RL:** Our RAMU framework achieves similar results to adversarial RL, and has significant benefits compared to adversarial RL in terms of implementation. RAMU achieves higher average rewards at test time in 3 out of 5 tasks (and higher average rewards in aggregate: 1.08 vs. 1.05), and better safety constraint satisfaction at test time in 3 out of 5 tasks (and similar safety constraint satisfaction in aggregate: 80% vs. 82%). Unlike adversarial RL, RAMU accomplishes this in a way that (i) does not alter the data collection process, (ii) does not require training an adversary in a minimax formulation, and (iii) does not require different implementations during training and testing. These all represent meaningful drawbacks of adversarial RL, which make adversarial RL unsuitable for training in many real-world tasks.
* **[Q3] Table 1:** We bold the risk-averse implementation of our RAMU framework to highlight the relevant version of our approach for the reader to focus on.
### [Q1, Q2] Implementation details
> [Q1] What's the choice of the function $g$ (line 224-225) in your experiments?
In our experiments, we consider the Wang transform with $\eta = 0.75$ as well as the special (risk-neutral) case of the expectation operator (see lines 302-306). The function $g$ corresponding to these are shown in Figure 7 in Appendix C. The form of $g$ for the Wang transform is also in line 103, and the expectation operator corresponds to a linear $g$.
> [Q2] In lines 229-230, is there only one Q function for calculating all sampled transitions?
Yes, there is one RAMU cost Q function (and one RAMU reward Q function). As shown in the critic loss functions (lines 215-216), we are interested in calculating the estimates in (7) with the current target Q function. Therefore, for each sampled transition model, we apply the standard cost Bellman target in lines 229-230 using this single Q function. Then, we combine these estimates using the weighted sample average in (7).
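For concreteness, here is a minimal sketch (not the authors' implementation; the helper names and the descending-sort convention are our own, and distortion-weight conventions vary in the literature) of a Wang-transform weighted sample average over hypothetical sampled cost Bellman targets:

```python
from statistics import NormalDist

def wang_distortion(u, eta=0.75):
    """Wang transform g(u) = Phi(Phi^{-1}(u) + eta), with g(0) = 0 and g(1) = 1."""
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(u) + eta)

def wang_weights(n, eta=0.75):
    """Weights g((i+1)/n) - g(i/n); they are nonnegative and sum to 1."""
    return [wang_distortion((i + 1) / n, eta) - wang_distortion(i / n, eta)
            for i in range(n)]

def risk_averse_average(cost_targets, eta=0.75):
    """Distortion-weighted sample average: sort costs in descending order so that
    the inflated first weight g(1/n) (> 1/n for eta > 0) lands on the worst outcome."""
    srt = sorted(cost_targets, reverse=True)
    return sum(w * t for w, t in zip(wang_weights(len(srt), eta), srt))

# Hypothetical cost Bellman targets from 5 sampled transition models:
targets = [0.9, 1.1, 1.0, 1.4, 0.8]
risk_averse = risk_averse_average(targets, eta=0.75)
risk_neutral = sum(targets) / len(targets)
print(risk_averse, risk_neutral)  # the risk-averse estimate exceeds the plain mean
```

Setting `eta = 0` recovers uniform weights, i.e., the risk-neutral expectation special case discussed in the paper.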
---
Rebuttal 2:
Title: Are you satisfied by the answers?
Comment: Dear reviewer,
Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work.
Thank you,
Area Chair
---
Rebuttal Comment 2.1:
Comment: I am following up on this, especially given that your review is currently the most critical one. Do you find the authors response convincing or do you have a serious issue against this paper?
Thank you,
Area chair | Summary: This paper proposes a methodology for distributionally robust RL via the use of risk measures and leveraging risk (Fenchel) duality, dealing with what they call model uncertainty. The paper introduces the RAMU Q function and Bellman operators respectively, which are based on modifying the standard risk-neutral operators by inducing risk-awareness over the choice of the transition model. The RAMU Q function and respective Bellman operators exhibit nice properties, due to the class of risk measures considered (distortion risk measures).
Then, the paper describes a model-free implementation of the proposed approach with a single training environment, leveraging interesting results for statistical estimation of risk functionals (specifically distortion risk measures). Lastly, the proposed approach is verified on a number of numerical benchmarks.
Strengths: Overall, I think this is a paper with potential, and it was a pleasure reading it. The RAMU framework is indeed interesting and seems to be effective, although I have some concerns (see below).
The estimation of the Bellman operators in Section 6 within the RAMU framework can indeed be computed efficiently, as the authors point out, which is an advantage of the approach. The application of the results of Jones and Zitikis is quite interesting.
The experimental section is quite detailed and the comparisons with existing methods is well-thought, especially with a large number of similar alternative approaches. Also the experiments bring out the effect and efficacy of risk-averse RL well.
Weaknesses: Risk duality and the equivalence to minimax (distributionally robust) optimization (under certain assumptions) is very well known and established (in essence it is just Fenchel duality). Therefore, I do not think it can be claimed as a novelty or contribution of this paper by itself (e.g., in lines 5 to 7 in the abstract). I would suggest that the authors rephrase to highlight their contributions more specifically.
In the discussion about risk measures, I do not see any explanation of *why* one should choose to work with distortion risk measures or coherent risk measures, and why the axioms proposed by Majumdar and Pavone are appropriate. Axioms are subjective, so in principle there is no universal reason to use one set of axioms over another. Of course, distortion risk measures exhibit amazing analytical properties, and this is probably among the main reasons the authors chose this class, but the discussion and point need to be made.
The RAMU cost function as defined in line 156 is introduced somewhat arbitrarily, in the sense that it is unclear if this is somewhat related to a base problem such as (2). While I understand the motivation and indeed it *seems* to make sense, usually one would start from such a base problem, possibly risk-averse in a certain way that makes intuitive and operational sense, and then build their way towards Q factors and Bellman optimality conditions.
Theorem 1 follows directly from the application of risk duality. I am not sure if this result "deserves" to be a theorem; I would think that stating it as a proposition would be more fitting. As a general comment, the theory in this paper is somewhat limited; however, I have still enjoyed reading the paper.
The discussion under "Generative distribution of transition models", line 236 onwards is quite convoluted and involved without particular reason in my opinion. While I finally understood what happens (I think!), I believe that the authors should make an effort to write this part in a much clearer manner.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is a "distribution over transition models" in lines 41-42? This is very ambiguous at this point.
Also, related, in Lines 72-74: Defining this product is somewhat obscure. For instance, does this make sense when S and A are uncountable sets (or do you implicitly assume finite states and actions throughout)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We are glad that you enjoyed reading it, and found our RAMU framework to be interesting and effective. We also appreciate your thoughtful suggestions, which will help to improve our paper. Please see below for responses to all of your questions and comments, which we hope address your main concerns. If so, we ask that you please consider updating your review scores to reflect our responses.
### [W1, W4] Theoretical contributions / novelty
* Our RAMU framework represents a novel contribution to the RL literature, and we provide important theoretical results for the corresponding RAMU Bellman operators and Q functions that support the use of this framework. We provide theoretical connections to distributionally robust RL (Theorem 1), and we prove contraction properties for the RAMU Bellman operators (Corollary 2) that provide theoretical support for training the RAMU Q functions via standard temporal difference methods. It was not trivial to construct a novel RL framework for addressing model uncertainty with these theoretical properties and an efficient implementation.
* We agree that the general connection between coherent risk measures and robustness is known (we include this known result in Appendix B.2). However, we apply this known result to establish an equivalence between our RAMU framework and a class of distributionally robust safe RL problems, which is a novel and important result. By establishing this connection, we are able to efficiently address model uncertainty in a deep RL context through the use of risk measures. We will update the paper to make this contribution clear, and we will include the distributionally robust safe RL problem definition to which our RAMU problem is equivalent.
### [W2] Choice of coherent distortion risk measures
* In our RAMU framework, we leverage properties of **coherent** risk measures to provide robustness guarantees, and we leverage properties of **distortion** risk measures to provide an efficient, model-free implementation based on weighted sample averages that does not involve minimax optimization. We will add this commentary to the paper to make our choice of coherent distortion risk measures more clear.
* Please see Majumdar and Pavone (2020) for a detailed discussion on why each of the properties of coherent distortion risk measures is important in the context of robotics. In general, coherent risk measures and distortion risk measures are two of the most popular classes of risk measures used in the literature, so there is a consensus that the characteristics of these classes are desirable.
### [W3, W5, Q1, Q2] Other clarifications
> [W3] The RAMU cost function as defined in line 156 is introduced somewhat arbitrarily…usually one would start from a base problem…
We will update Section 4 to start from the formal problem definition that corresponds to the RAMU update in (4). The objective and constraint in this formulation are very similar to the corresponding Q functions defined in line 156, just averaged over initial states and actions.
> [W5] The discussion under "Generative distribution of transition models", line 236 onwards is quite convoluted and involved…
The main goal of this section is to describe how we can define a distribution $\mu$ over transition models in a way that can be implemented using only data collected from a single training environment. We accomplish this by constructing samples from perturbed versions of the training environment. We will update this section to make this goal clear.
> [Q1] What is a “distribution over transition models” in lines 41-42? This is very ambiguous at this point.
“Distribution over transition models” in lines 41-42 refers to $\mu$, the same concept of a distribution over potential environments that was introduced in the previous paragraph (lines 34-35). We will update the language to make this more clear at this stage of the paper.
> [Q2] …in Lines 72-74: Defining this product is somewhat obscure…
This structure in lines 72-74 is known as rectangularity. It is a very common assumption throughout the robust RL literature (see line 75 for references), and we use standard notation from the literature in lines 72-74 to describe this structure. Rectangularity simply implies that the model uncertainty $\mu_{s,a}$ at every state-action pair is independent of the uncertainty at other state-action pairs. In general, this is a conservative assumption, but it allows for recursive definitions of Q functions.
---
Rebuttal 2:
Title: Are you satisfied by the answers?
Comment: Dear reviewer,
Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work.
Thank you,
Area Chair
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for taking the time and providing detailed responses to my comments. I am mostly satisfied with the answers provided, except possibly with the point regarding using coherent or distortion risk measures in [W2]. While the first bullet is solid, since such classes of risk measures have exceptional properties, the second bullet is highly subjective (for instance, there is a huge debate in the finance literature about the axioms comprising the class of coherent risk measures, and there is absolutely no consensus). It happens that such nice classes of risk measures are used in academic research because of analytical tractability, which again is a very solid reason to use them.
For now, I will keep my score unchanged, but I am leaning favorably towards this paper. | Summary: This paper presents a Temporal Difference (TD) learning method for addressing the ``Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning'' problem. Specifically, the authors consider a Constrained Markov Decision Process (CMDP) combined with Bayesian uncertainty sets. They are trying to attain a policy which is both robust and safe, meaning that it optimizes a nested risk measure of the discounted return while satisfying certain guarantees on the discounted cost. The proposed method is implemented and evaluated by comparing its performance against adversarial RL, safe RL, and domain randomization approaches across multiple environments.
Strengths: - This paper addresses a noteworthy problem. As a general matter, I believe that risk-aware policy selection methods for Bayesian MDPs, as also studied in [1, 2, 3], are an interesting and open area of research.
- Even though the problem is notation-heavy, the paper is well presented and maintains a high level of readability.
- The authors have proven their implementation's effectiveness across five different tasks. Furthermore, the authors' utilization of a large number of training samples adds credibility to the algorithm's performance evaluation.
[1] Giorgio Angelotti, Nicolas Drougard, & Caroline Ponzoni Carvalho Chanel. (2023). An Offline Risk-aware Policy Selection Method for Bayesian Markov Decision Processes. arXiv preprint arXiv:2105.13431.
[2] Lobo, E. A., Ghavamzadeh, M., & Petrik, M. (2020). Soft-robust algorithms for batch reinforcement learning. arXiv preprint arXiv:2011.14495.
[3] Petrik, M., & Russel, R. H. (2019). Beyond confidence regions: Tight Bayesian ambiguity sets for robust MDPs. Advances in neural information processing systems, 32.
Weaknesses: - **Theoretical results**: The primary focus of this paper is the implementation and experimental evaluation; there are limited theoretical results in this paper.
- **Comparisons**: In the ``strengths'' section of my review, I mentioned three risk-aware policy selection methods for Bayesian MDPs that I am aware of [1, 2, 3]. These methods have similar objectives, albeit with two key differences: (i) instead of CMDPs, the objective is for a standard MDP and does not require safety guarantees; (ii) the risk measure is (or might be) applied over the entire stochasticity $\mu$ rather than recursively over each $\mu_{s, a}$. As both algorithms use samples of models to form a Monte Carlo estimate of the risk measure, the $\theta$ update in Algorithm 1 is particularly similar to the $\operatorname{RiskEvaluation}$ operation introduced by [1].
- Firstly, I would suggest that the authors include these papers in the ``uncertainty in reinforcement learning'' part of the related works.
- Secondly, it would have been more informative if the paper compared its proposed method to one of these existing methods, instead of focusing on comparisons with adversarial RL and domain randomization.
- **Presentation**: Theorem 1 requires a more concrete and explicit formulation. $\zeta_{a, s}$ is introduced as ``depending on $\rho^+$ and $\rho$''; without properly defining the dual presentation of coherent risk measures beforehand. This lack of clarity makes it very challenging for non-expert readers to understand the paper.
In conclusion, while the paper proposes an interesting problem and provides valuable experimental results, there is a lot of room for improvement. Thus, I rate this paper as a borderline accept.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - **The Objective**: It would be interesting to observe the performance of the proposed algorithm by varying the learning rates for $\theta_r$ and $\theta_c$ and exploring different risk measures for $\rho^+$ and $\rho$. For instance, imagine we set $\rho^+$ as $CVaR_{\alpha_r}^+$ and $\rho$ as $CVaR_{\alpha_c}$. As $\alpha_c$ decreases, the algorithm becomes stricter about violating safety conditions, and as $\alpha_r$ decreases, the algorithm becomes more robust. I am curious to see the differences in experimental outcomes under such settings.
- **Evaluation**: In the experiments, different algorithms are being compared based on the safety percentage and total rewards. I would like to see their comparison based on (an estimated version of) the Lagrangian form of the actual objective (4) (while fixing the set of hyperparameters for all methods). Since the proposed algorithm directly aims to optimize this objective, it is reasonable to expect it to outperform other methods in terms of this specific evaluation metric. Conducting such an evaluation can serve as a sanity check.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: no limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We are glad you agree this is an important problem, and we appreciate your comments on our paper’s clear presentation and strong experimental results. Please see below for responses to your questions. We hope that these clarifications address your main concerns. If so, we ask that you please consider updating your review scores to reflect our responses.
### [W1, W3] Theoretical results / presentation
* **[W1] Theoretical results:** Our RAMU framework represents a novel contribution to the RL literature, and we provide important theoretical results for the corresponding RAMU Bellman operators and Q functions that support the use of this framework. We provide theoretical connections to distributionally robust RL (Theorem 1), and we prove contraction properties for the RAMU Bellman operators (Corollary 2) that provide theoretical support for training the RAMU Q functions via standard temporal difference methods. It was not trivial to construct a novel RL framework for addressing model uncertainty with these theoretical properties and an efficient implementation.
* **[W3] Presentation:** Due to space constraints, we present Theorem 1 with the necessary details to provide an intuitive understanding of the result, and we defer the full formal treatment of the result to the Appendix. Appendix B.2 formally defines the appropriate probability space and provides the dual representation result that we use (including reference to the appropriate dual space). We will update Theorem 1 in the main text for additional clarity.
### [W2] Comparison of RAMU to other methods
* **Experiment baselines:** In our experiments, we compare against the most popular methods for robustness to model uncertainty in deep RL. Adversarial RL represents a common implementation of robust RL in the deep RL setting, so we use a popular action-robust adversarial RL method as a baseline in our experiments. Domain randomization is the most popular implementation based on distributions of transition models, so we also consider this as a baseline.
* **Comparison to [1, 2, 3]:** We agree that [1, 2, 3] are interesting works that consider model uncertainty in RL. However, they are not well suited as baselines in the deep RL setting that is the focus of this paper. The methods in [1, 2, 3] have several key differences compared to the setting that we consider in this paper. Most importantly, they are offline RL methods (vs. online RL in this paper) that are designed for tabular settings (vs. deep RL in this paper). Please also note that we do not focus on a Bayesian setting in this work (however, by choosing $\mu$ to represent a posterior, our framework can be applied in a Bayesian setting).
### [Q1, Q2] Experimental analysis
* **[Q1]:** The comparison between RAMU (Wang 0.75) and RAMU (Expectation) represents an example of the analysis you suggest, as the expectation operator is equivalent to setting the Wang hyperparameter to zero. An alternative way to change the level of robustness is to vary the hyperparameter $\epsilon$ that defines our distribution $\mu$. We include these results in Appendix C (Figure 6), which demonstrates the trends you have mentioned. We expect that varying a risk measure hyperparameter would lead to similar trends (see comparison between RAMU with Wang 0.75 vs. Expectation).
* **[Q2]:** The goal of safe RL is to maximize rewards while satisfying the safety constraint. Therefore, we directly measure these two key quantities as our metrics of interest in our experiments. We have also included the total rewards and total costs for every algorithm and test environment in Appendix C (Figures 3, 4, 5). Note that a Lagrangian relaxation would be one way to solve the safe RL problem, but we do not consider this approach in our experiments (we use CRPO; see lines 654-658 for details).
---
Rebuttal Comment 1.1:
Title: Response
Comment: The authors have responded to most of my empirical concerns. Therefore, I will increase my rating from 5 to 6.
Regarding Q1: First, it was only a suggestion, but the comparison between RAMU (Wang 0.75) and RAMU (Expectation) does not cover what I mentioned there. It only covers two risk measures and does not investigate different $\alpha_c$ and $\alpha_r$.
---
Rebuttal 2:
Title: Are you satisfied by the answers?
Comment: Dear reviewer,
Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work.
Thank you,
Area Chair | Summary: The paper introduces a deep reinforcement learning framework for safe decision-making in uncertain environments. The authors propose a risk-averse approach towards model uncertainty using coherent distortion risk measures. They provide robustness guarantees for the framework by showing its equivalence to a distributionally robust safe reinforcement learning problem. The framework is efficient and model-free, utilizing standard data collection from a single training environment. Experiments on continuous control tasks with safety constraints demonstrate the framework's robust and safe performance across a range of perturbed test environments.
Strengths: [+] The paper is logically clear and well-written, making it easy to follow.
[+] The paper starts from theory, first defining a new Q-function and a new Bellman operator, then proving the equivalence between the new Bellman operator and distributionally robust Bellman operators with respective ambiguity sets, thereby verifying the rationality of the definition and the contraction property of the new Bellman operator. This provides a theoretical basis for the algorithm design.
[?] What is the relationship between safety and robustness? The paper theoretically proves that the newly proposed RAMU Bellman operator has robustness guarantees. However, it seems not to mention much about its relationship with Constrained Markov Decision Process (CMDP) and how to ensure that the point converged to by the RAMU Bellman operator is safe.
[?] Related to the previous question, why consider both safety and robustness here? It seems that considering both safety and robustness is crucial for Theorem 1. Is it necessary to clarify this point more explicitly in the abstract? For example, the current abstract's first sentence emphasizes the importance of safety, line 5 separately emphasizes robustness guarantees, and line 7 re-emphasizes robustness, which can be somewhat confusing.
Weaknesses: see above
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We are glad you found the paper to be clear and well-written, and appreciated the theoretical support for our proposed framework. Please see below for responses to your questions, which clarify the importance of both safety and robustness. If these clarifications address your concerns, we ask that you please consider updating your overall review score to reflect this.
### [Q1, Q2] Importance of safety and robustness
* Safety is often a prerequisite for real-world decision making applications, so we consider a safe RL setting with a Constrained MDP (CMDP) as our starting point. However, safe RL with a CMDP finds a policy that is only safe in a single training environment, with no robustness guarantees related to performance and safety in other environments. In many real-world scenarios, the environment at deployment time may be different from the training environment due to factors such as modeling errors or unknown disturbances.
* We incorporate robustness to model uncertainty in both the objective (rewards) and safety constraint (costs) of a CMDP. We accomplish this by learning separate RAMU Q functions for the reward and cost, which both appear in our RAMU update in (4). As demonstrated in our experimental results, this update leads to policies that (i) achieve robust performance (due to the use of our RAMU reward Q function in the objective) and (ii) remain safe across a range of test environments (due to the use of our RAMU cost Q function in the safety constraint). Our RAMU framework significantly outperforms the standard safe RL baseline that applies the update in (2), which uses standard Q functions that only consider a single training environment $p$ and do not incorporate robustness.
---
Rebuttal 2:
Title: Are you satisfied by the answers?
Comment: Dear reviewer,
Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work.
Thank you,
Area Chair
---
Rebuttal Comment 2.1:
Comment: Following up on this! | Rebuttal 1:
Rebuttal: Thank you to all of the reviewers for their thoughtful feedback. We are excited to see the reviewers agree that the paper is clear and well-written (ZX7b, mRCw, 22QQ), proposes a novel framework with a practical and efficient implementation (F3rP, 22QQ, JpRL), and provides strong theoretical (ZX7b, JpRL) and experimental (mRCw, 22QQ) support for this framework. We have replied directly to each reviewer with detailed responses, and we will update the paper to incorporate clarifications based on reviewer suggestions. If we have addressed your main concerns, we ask that you please consider updating your review scores to reflect our responses. Thank you for helping us to improve our paper! | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces the Risk-Averse Model Uncertainty (RAMU) framework for safe reinforcement learning in uncertain environments. RAMU incorporates a distribution of transition models and applies a risk-averse perspective using coherent distortion risk measures. The framework offers an efficient, model-free implementation that uses only a single training environment. Experimental results demonstrate the framework's ability to produce robust, safe performance in perturbed test environments. Unlike existing distributionally robust (DR) approaches, RAMU eliminates the need for minimax optimization.
Strengths: - The proposed perturbation function models the distribution of environment transition models, providing a foundation for practical implementation of distributionally robust algorithms.
- Compared to existing methods, the RAMU framework is implemented efficiently. It avoids complex minimax optimization, which is the major obstacle to applying Distributional Robustness methods to DRL.
Weaknesses: - The problem addressed in this paper is the handling of model uncertainty in safe reinforcement learning (RL) scenarios. However, it appears that the issue of model uncertainty in safe RL is similar to the problem in standard RL. Since the proposed RAMU method is not specifically designed for safe RL, it would be more convincing if it were compared to existing distributionally robust methods such as those discussed in [1] and [2].
- The formulation of the distributionally robust safety problem in the article lacks clarity. The problem definition starts by directly modifying the Q function into the DR Q function in Eq4, which is a shortcut approach. It would have been more appropriate to first define the problem and then derive the suitable form of the Q function. Although solving the Eq4 definition is straightforward by combining the safe RL method with DR RL, it is important to note that the worst-case transitions associated with the reward and cost (the $\beta$ in Eq5 and Eq6) are not the same. However, in reality, there is only one transition, which makes the proposed method more conservative in practice.
- The transition function f(s, s') mentioned in line 257 may only be effective for tasks involving robot control, where dynamics follow linear patterns and such perturbations are effective. For tasks involving image inputs, it remains unclear what kind of function f should be used.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - I understand that the contribution of the paper lies in proposing a distributionally robust (DR) safe reinforcement learning method that does not require a minmax operation. However, given that there is no direct connection between DR algorithm design and safety, it is essential to compare it with existing DR methods such as [1] and [2].
- The proposed algorithm does have similarities to policy smoothing algorithms, as mentioned in [3] and [4]. It is plausible to consider that combining policy smoothing with adversarial RL could potentially yield better results compared to the RAMU framework. In other words, could exploring the combination of observational robustness and action robustness provide improvements over predefined model robustness?
Ref.
[1] Robust Reinforcement Learning using Offline Data
[2] Distributionally Robust Q-Learning
[3] Deep Reinforcement Learning with Robust and Smooth Policy
[4] Policy Smoothing for Provably Robust reinforcement learning.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: As mentioned in the Conclusion section of the paper, the choice of the model distribution µ and risk measure ρ in the RAMU framework is user-defined, and the RAMU framework only addresses robustness with respect to model uncertainty and safety defined by expected total cost constraints. Currently, this paper only considers transition dynamics that follow linear patterns, which may not be suitable for tasks that involve image inputs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. Please see below for clarifications and responses to your questions. In particular, we clarify the definition of “distributionally robust RL” [a, b] considered in our theoretical results, and how this differs from other definitions that have appeared in the literature more recently. We hope that our responses address your main concerns, and we ask that you please consider updating your review scores to reflect these clarifications.
### [W1, Q1, Q2] Distributionally robust RL and comparison to other methods
* **Definition of distributionally robust RL:** We consider a distribution $\mu$ over transition models in our work, and show in Theorem 1 that our RAMU framework is equivalent to a class of distributionally robust RL problems as defined by [a, b]. Distributionally robust RL [a, b] considers ambiguity sets of *distributions over transition models* in $P(\mathcal{M})$ (see lines 79-81), and is different from robust RL [c, d] that applies uncertainty sets directly over transition models in $\mathcal{M}$.
* **[W1, Q1] Comparison to [1] and [2]:** Please note that both [1] and [2] consider the typical robust RL setting [c, d] based on uncertainty sets of transition models. This is different from our approach, which considers a distribution over transition models and is equivalent to a class of distributionally robust RL problems as defined by [a, b]. In recent years, researchers have started using “robust” and “distributionally robust” interchangeably to mean robust RL [c, d], which has caused some confusion. Also different from our approach, [1] considers the offline RL setting and [2] focuses on the tabular RL case.
* **[Q2] Comparison to [3] and [4]:** We agree that the idea of observational robustness considered in [3] and [4] is an important area of research, but it is not the focus of this work. We focus on being robust to uncertainty in the transition model (i.e., dynamics), and these two sources of uncertainty require different analysis and algorithms. Please note that [3] and [4] do not make any direct connections to the definition of distributionally robust RL [a, b] considered in this work. Action robustness is more closely related to uncertainty in dynamics because it leads to changes in state transitions, which has led to its use as a robust RL [c, d] method in deep RL settings.
* **Experiment baselines:** We compare against the most popular methods for robustness to model uncertainty in deep RL. Adversarial RL represents a common implementation of robust RL [c, d] in the deep RL setting, so we use a popular action-robust adversarial RL method as a baseline in our experiments. Domain randomization is the most popular implementation based on distributions of transition models, so we also consider this as a baseline.
### [W2] Problem formulation
* We will update Section 4 to start from the formal problem definition that corresponds to the RAMU update in (4). The objective and constraint in this formulation are very similar to the corresponding Q functions defined in line 156, just averaged over initial states and actions. We will also make clear the distributionally robust safe RL problem definition to which our RAMU problem is equivalent, which involves ambiguity sets of distributions over transition models (see lines 79-81).
* As you pointed out, the worst-case distributions $\beta$ over transition models will be different for rewards and costs in our formulation. Because we do not know the true environment at test time, it makes sense to take this conservative approach in order to guarantee robustness in both the rewards and costs at deployment time. This is a common approach when considering robustness in a safe RL setting [e].
### [W3] Extension to image inputs
As an example that makes sense for our experiments, we consider an intuitive perturbation function based on percentage changes in each dimension of state transitions. However, our methodology works with any choice of distribution $\mu$ over transition models (or equivalently, any choice of perturbation function $f_x$), which is defined by the user to best suit the application. In RL from images, it is common to consider an MDP in a latent representation space, and we can apply our methodology in this latent space. Alternatively, in scenarios where detailed simulator access is available, it would also be possible to generate next state images from multiple transition models by leveraging this simulator.
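As an illustration, one plausible form of such a percentage-based perturbation function can be sketched as follows. The function name, signature, and the role of `epsilon` here are assumptions for the sketch, not the paper's exact definition: each dimension of an observed state change is scaled by a random factor, so repeated calls behave like samples from different transition models.

```python
import random


def sample_perturbed_next_state(s, s_next, epsilon, rng=random):
    """Given an observed transition s -> s_next, scale each dimension of the
    state change (s_next - s) by a factor drawn uniformly from
    [1 - epsilon, 1 + epsilon], yielding a next-state sample from a
    perturbed transition model."""
    return [si + rng.uniform(1.0 - epsilon, 1.0 + epsilon) * (ni - si)
            for si, ni in zip(s, s_next)]
```

Repeated calls on transitions collected in the single training environment then act as Monte Carlo samples from a distribution $\mu$ over transition models, with `epsilon` controlling the spread of that distribution.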
**References:**
[a] H. Xu and S. Mannor. Distributionally robust Markov decision processes. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc., 2010.
[b] P. Yu and H. Xu. Distributionally robust counterpart in Markov decision processes. IEEE Transactions on Automatic Control, 61(9):2538–2543, 2016.
[c] G. N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2): 257–280, 2005.
[d] A. Nilim and L. E. Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005.
[e] D. J. Mankowitz, D. A. Calian, R. Jeong, C. Paduraru, N. Heess, S. Dathathri, M. Riedmiller, and T. Mann. Robust constrained reinforcement learning for continuous control with model misspecification. arXiv preprint, 2021. arXiv:2010.10644.
---
Rebuttal Comment 1.1:
Comment: I believe the core contribution of this article lies in providing a simple and effective robust method to address model uncertainty in MDPs. However, the method itself does not directly address the safety aspect or the CMDP problem. The article needs to clarify the connection between safety and robustness. While the article only addresses the robustness issue, the overall background of the article is strongly tied to the concept of safety, which seems strange to me. Additionally, there is significant room for improvement in the writing of this article. However, I would like to increase the score to 5 for the good idea of a distribution over transition models.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We agree that the core contribution of our paper is a simple and effective method for incorporating robustness to model uncertainty in a deep RL setting.
We consider safe RL modeled by a Constrained MDP (CMDP) as our starting point, as safety is often a prerequisite for real-world decision making applications. We incorporate robustness to model uncertainty in the safe RL setting by applying our RAMU framework to both the objective (rewards) and safety constraint (costs) of a CMDP. As demonstrated in our experimental results, this leads to policies that achieve robust performance *and* robust safety across test environments (i.e., robustness in both components of a CMDP). If safety is not relevant in an application, our framework could also be applied to provide robustness in a standard MDP, which would result in a special case of the update in (4) without the safety constraint.
In the updated version of our paper, we will clarify how our problem formulation leads to robust performance *and* robust safety constraint satisfaction in a CMDP. Thank you for helping us to improve our paper. | null | null | null | null | null | null |
STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning | Accept (poster) | Summary: The paper introduces the Stochastic Transformer-based wORld Model (STORM), an efficient world model architecture. STORM encodes image inputs using a stochastic variational autoencoder and predicts latent states using a GPT-like sequential model. It then trains the dynamics and policy based on the outputs of the stochastic variational autoencoder and the sequential model. The authors conducted experimental comparisons with several classic baseline methods on the Atari100k benchmark. The results indicate that STORM is faster and achieves better performance.
Strengths: 1. Although the individual components used in STORM have been introduced before, the authors designed a solid structure that combines these parts into an effective method.
2. The experimental results demonstrate the effectiveness of STORM, and more importantly, they improve the time efficiency of Model-Based Reinforcement Learning methods.
Weaknesses: 1. In the experimental section, please add a comparison with Speedy Zero [1]. Speedy Zero is also a model-based RL method which is proposed recently and has also achieved good time efficiency and performance on Atari.
2. Beyond Atari, can STORM be applied to other tasks, such as MuJoCo, DMC, MetaWorld, etc.? Dreamerv3 can achieve very good results in a wide range of different environments. If STORM cannot, the practical significance of this paper will be greatly reduced.
3. It would be better if further analysis could be provided on the benefits of using a stochastic variational autoencoder.
Reference:
[1] Mei et al. "SpeedyZero: Mastering Atari with Limited Data and Time." In ICLR 2023. https://openreview.net/forum?id=Mg5CLXZgvLJ
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The generalizability of STORM could be further studied.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your effort spent reviewing our paper and providing many valuable suggestions. We will include the suggestions and the pointed concurrent work in the revised paper. Below, we want to address the main concerns raised in the review.
- Response to **(1)**: We appreciate your observation regarding the missing paper, which we will include in the related work section **(Line 117)** rather than the experiments part for the reasons we elaborate next. As stated in **Section 2 (Lines 117-123)** and **Section 4.1 (Lines 221-226)**, the design and experimental validation of STORM have mainly followed the approach of previous works (DreamerV3, IRIS, TWM). Similarly to these works, we do not directly compare our results with **lookahead search based methods** such as MuZero, EfficientZero, and SpeedyZero, as our primary goal is to refine the world model itself. Nonetheless, lookahead search techniques can be combined with our method in the future to further enhance the agent's performance.
- Response to **(2)**: Thank you for the suggestion to carry out additional experiments on other benchmarks, which is interesting and would indeed further strengthen the contribution of the paper. Nonetheless, as explained in **Response 1** in the **Author Rebuttal** above, the additional computing resources and time required for training and validating STORM make this infeasible at present. We will explore it in the future; please kindly refer to **Response 1** in the **Author Rebuttal** for more details.
- Response to **(3)**: The use of stochastic representations is prevalent in reconstruction-based model-based RL algorithms, including STORM, Dreamer, TWM, and SimPLe. In our early experiments, we found that representing observations with deterministic features (e.g., from a vanilla autoencoder) and replacing the KLDiv dynamics loss with an L2 loss results in significant inconsistency between the reconstructed environment and the original environment after a few steps. In contrast, a stochastic representation maintains stable reconstruction over a much longer horizon (**Figure 7** in **Appendix F**). This phenomenon was also observed by SimPLe [1] (in their Appendix A, ablations on models). That said, this approach has limitations: for instance, contrastive learning methods, which have proven effective for RL in EfficientZero/CURL, cannot be directly applied to STORM.
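For context on the dynamics loss discussed above: the KLDiv term compares two categorical distributions over the latent codes. A minimal sketch of the KL term for a single categorical variable (illustrative only; the function name and smoothing constant are ours, not the authors' implementation):

```python
from math import log

def kl_categorical(p, q, eps=1e-8):
    """KL(p || q) between two categorical distributions given as probability lists.
    eps is a small smoothing constant to avoid log(0); it is our choice, not the paper's."""
    return sum(pi * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

Identical distributions give a (numerically) zero KL, and the penalty grows with the mismatch between the predicted and posterior distributions, unlike a plain L2 loss on deterministic features.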
---
**References**
[1] Kaiser, Lukasz, et al. "Model-based reinforcement learning for atari." *arXiv preprint arXiv:1903.00374* (2019).
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. Due to the lack of experiments in more environments, it is hard for me to judge the generalization ability of STORM. Therefore, I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your careful consideration of and feedback on our rebuttal. However, we believe the Atari 100k benchmark is well suited to assessing STORM's generalization capacity.
1) Many prior works, including IRIS, TWM, SimPLe (our baselines), EfficientZero, and SpeedyZero, conduct their empirical evaluations on the Atari 100k benchmark alone. This precedent both supports the credibility of the benchmark and underscores its relevance to our study; relying on it should not be considered a significant limitation of our work.
2) DreamerV3 is backed by DeepMind, which affords resources well beyond what is available to most researchers. Our current circumstances, regrettably, constrain our capacity to run a broader set of experiments.
As highlighted in our `Response to Weakness 2` addressed to `Reviewer rfCq`, it is common for two algorithms to achieve identical global human mean/median scores while excelling in distinct environments. STORM's demonstrated ability to excel in specific environments, as detailed in Section 4.2, gives practitioners a distinctive additional option, which we believe can enrich practical applications.
We thank you again for dedicating your time to our rebuttal, and we would appreciate it if you could reconsider your assessment of the paper.
---
Rebuttal 2:
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | Summary: This paper proposes a world model architecture (STORM) to train RL agents in imagination. The world model is composed of an autoencoder with categorical latents and a Transformer. These modules are trained jointly with a reconstruction loss, a next latent state prediction loss, as well as reward, episode termination, and representation losses. Experiments in the Atari 100k benchmark indicate that the approach is effective.
Strengths: - The method is technically sound with empirical results to back its effectiveness (outperforms other methods based on learning in imagination).
- It is faster to train than similar methods (e.g. twice as fast as TWM).
- The paper is well-written and easy to follow.
Weaknesses: - The method lacks novelty and the incremental improvements over previous work are not properly explored.
- STORM is a variant of TWM with two minor modifications: (1) a vanilla Transformer is used instead of a Transformer-XL, (2) latent state and action tokens are fused instead of kept as separate tokens.
- Currently, it is not clear why STORM achieves better results than TWM. Is it due to implementation details, or do the proposed modifications actually matter? And if they matter, why?
- The results have limited significance. Although STORM outperforms TWM, it only yields marginal improvements over DreamerV3, which was not specifically optimized for Atari100k. Moreover, recent model-free methods [1, 2] achieve similar or better results in the benchmark.
- Ablations and additional experiments fail to provide additional insights:
- lines 237-238: IRIS also relies on an autoencoder trained with a reconstruction loss and it still obtains good results on Breakout and Pong.
- l261: comparing the number of layers for STORM and IRIS/TWM is not straightforward since the hidden dimension is not the same (256 vs 512).
- 5.2: it is hard to draw any conclusions as only a few environments were considered and we do not know how they were picked. Also, it is not clear if including z_t yields statistically significant improvements.
- 5.3: the premise of this section is interesting but again too few environments are considered and results have limited statistical significance. The improvement on Freeway over previous methods relies on the addition of an extra demonstration while the other methods use exploration strategies that do not involve expert data. It would be interesting to know whether STORM leverages demonstrations better than other world models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### Do you use sticky actions?
---
I ran your code, on the 4 following games, with and without sticky actions:
| Environment | Reported | Without sticky actions | With sticky actions |
| --- | --- | --- | --- |
| BankHeist | 641 | 1044.0 | 218.5 |
| Breakout | 16 | 29.3 | 11.2 |
| MsPacman | 2673 | 2942.0 | 1921 |
| PrivateEye | 7781 | 4458.4 | 100.0 |
Can you clarify whether you used the Atari environments with (`ALE/<envname>-v5`) or without (`<envname>NoFrameskip-v4`) sticky actions? It is not mentioned in your paper but the code seems to use the v5 environments by default (TWM, IRIS and DreamerV3 do *not* use sticky actions).
It seems that my `v5` results are significantly below the reported results, and that the `v4` results are on par with or slightly better than them.
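The mechanism at issue here is ALE's "sticky actions" (enabled by default in the `v5` environments, disabled in `NoFrameskip-v4`): at each step the emulator repeats the previous action with probability 0.25 instead of executing the agent's chosen one. A minimal sketch of the mechanism (illustrative only, not the ALE implementation; names are ours):

```python
import random

def sticky_step(intended, previous, repeat_prob=0.25, rng=random):
    """ALE-style sticky actions: with probability repeat_prob the emulator
    repeats the previous action instead of executing the intended one."""
    if previous is not None and rng.random() < repeat_prob:
        return previous
    return intended
```

With `repeat_prob=0.0` this reduces to the deterministic `v4`-style behavior, which is why scores on action-sensitive games can differ so much between the two settings.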
### Ablations to investigate the differences with TWM
---
The paper would greatly benefit from a thorough investigation of the differences with TWM. It should be made clear why STORM performs better than TWM as it seems like a variant with two minor modifications.
- Is the substantial performance improvement explained by implementation differences (in particular, there is [an open issue about reproducibility](https://github.com/jrobine/twm/issues/3) on TWM’s repo), or by the two modifications?
- If these modifications matter, can you run ablations to demonstrate their effectiveness?
- Maybe it is due to the incorporation of tricks for policy learning from DreamerV3?
I am keen to significantly increase my overall rating if this concern is properly addressed during the rebuttals.
### Other concerns
---
- In my opinion, there would be substantial value in experimenting with more complex environments, e.g. Crafter [3], Minecraft [4], or Memory Maze [5]. Such results would expand our knowledge of what Transformer-based world models can achieve.
- Can you include the recent work on the benchmark [1, 2] in the related work section?
---
### References
- [1] *Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier*. D'Oro, Pierluca and Schwarzer, Max and Nikishin, Evgenii and Bacon, Pierre-Luc and Bellemare, Marc G and Courville, Aaron. The Eleventh International Conference on Learning Representations, 2022.
- [2] *Bigger, Better, Faster: Human-level Atari with human-level efficiency*. Schwarzer, Max and Obando-Ceron, Johan and Courville, Aaron and Bellemare, Marc and Agarwal, Rishabh and Castro, Pablo Samuel. arXiv preprint arXiv:2305.19452, 2023.
- [3] *Benchmarking the Spectrum of Agent Capabilities*. Hafner, Danijar. International Conference on Learning Representations, 2021.
- [4] *Minerl diamond 2021 competition: Overview, results, and lessons learned*. Kanervisto, Anssi and Milani, Stephanie and Ramanauskas, Karolis and others. NeurIPS 2021 Competitions and Demonstrations Track, 2022.
- [5] *Evaluating Long-Term Memory in 3D Mazes*. Pasukonis, Jurgis and Lillicrap, Timothy P and Hafner, Danijar. The Eleventh International Conference on Learning Representations, 2022.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed some technical limitations in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the effort spent reviewing our paper and for providing many valuable suggestions, which we will include in the revised version. Below, we address the main concerns raised in the review.
- Response to **Weakness 1** and **Question 2** about `comparison with TWM`: For further experiments, please refer to our **Response 3** in the **Author Rebuttal** above. As you pointed out, STORM demonstrates a training speed twice as fast as that of TWM, while also achieving superior performance scores. We believe that this achievement represents a significant and meaningful innovation.
- Response to **Weakness 2** `marginal improvements compared to DreamerV3 & not as good as BBF[2]`: Although the improvement in overall score is modest (112% to 126%), STORM is equal to or better than DreamerV3 on 21 out of 26 games.
While global human mean and median metrics offer an overall view of an RL algorithm's performance, it is common to observe variation across environments and methods. To illustrate, consider BBF [2] for comparison:
| Algorithm | Human mean & human median | No. of envs that STORM $\approx$ BBF | No. of envs that STORM > BBF |
| :-------: | :----------: | :-------------: | :-----------: |
| STORM | 1.267 & 0.584 | 4 | 5 |
| BBF | 2.247 & 0.917 | - | - |
Although BBF achieves nearly double the overall score of STORM, our method still outperforms it in five games, namely *Gopher*, *Hero*, *MsPacman*, *PrivateEye*, and *Qbert*. *Hero* and *PrivateEye* are two representative games in the Atari 100k benchmark that involve long-term exploration without rewards. Many of the environments where BBF surpasses STORM are ones in which RL agents already perform far better than humans, and we believe that research focused on such environments does not align with the expectations for future RL algorithms.
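The human mean and median figures in the table above are aggregates of per-game human-normalized scores. A minimal sketch of the standard computation, with hypothetical raw scores (the numbers below are illustrative, not from the paper):

```python
from statistics import mean, median

def human_normalized(agent, random_score, human_score):
    """Standard human-normalized score: (agent - random) / (human - random)."""
    return (agent - random_score) / (human_score - random_score)

# Hypothetical (agent, random, human) raw scores for three games -- NOT from the paper:
games = [
    (900.0, 100.0, 500.0),  # agent twice the human gap -> 2.0
    (300.0, 100.0, 500.0),  # halfway to human          -> 0.5
    (100.0, 100.0, 500.0),  # random-level performance  -> 0.0
]
scores = [human_normalized(a, r, h) for a, r, h in games]
print(mean(scores), median(scores))
```

Because the mean is dominated by games where an agent vastly exceeds human play while the median is not, two methods can rank differently under the two aggregates, which is the point the response makes about BBF.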
- Response to **Weakness 3**: We mainly chose factors that may have a major impact on STORM's performance for the ablation studies. Previous works like DreamerV3, IRIS, and TWM have also conducted ablation studies on their distinct contributions, such as policy training strategies, number of tokens, policy input selection, and other hyperparameters. Let us clarify your concerns:
- The ablation study on the number of layers is not meant for comparison with other methods, but to provide readers with insights into STORM's configuration. In the realm of deep learning, increasing the number of layers in neural networks may generally lead to improved performance, especially with residual connections. However, such a correlation is not evident in STORM. As discussed in **Sections 5.1 and 6**, we hope this analysis can inspire researchers interested in designing new model-based RL algorithms or exploring novel environments.
- Conducting ablation studies on the entire benchmark demands approximately **7 (ablation studies) $\times$ 0.2 (days per game) $\times$ 5 (seeds) $\times$ 26 (games) = 182 (NVIDIA GeForce RTX 3090 days)**, which is resource-intensive and infeasible given our time and funding budget at present. Therefore, we have chosen representative environments that are sensitive to different configurations, based on our experience from STORM's development. Of course, as discussed above, users should customize STORM's configuration, or the selection of algorithms, to their specific environments.
- For the demonstration trajectories, please refer to our **Response 1** in the **Author Rebuttal** above.
- Response to **Question 1** about `the use of sticky actions`:
- It is correct that we use `v5` environments, as described in `train.py` and `train.sh`. However, we respectfully disagree with the results you provided. We re-ran experiments on your listed environments (`v5`) and obtained results similar to those reported in the paper. We suspect the discrepancy arises from the single seed used for training in your case, whereas the reported results are averages over 5 runs (retrained with different seeds). As plotted in **Appendix A**, all results in your table fall within a reasonable error range. If all 5 seeds produce such results, please kindly let us know, as it would be quite surprising.
- The use of sticky actions, recommended in [1], allows us to verify the algorithm's robustness. At the project's outset, we decided to adopt the latest environment configuration, using `v5` instead of `v4`.
- We also tested our algorithm on your listed environments without sticky actions and found that, while there was no significant difference in the other three environments, there was an improvement on *Breakout*. This improvement can be attributed to the sensitivity of actions like catching, serving the ball, and moving the slider in this type of game.
- Response to **Question 3**, other concerns:
- Thanks for your helpful suggestion; agreed. Performing additional experiments on new, more complex environments would help demonstrate the efficiency of STORM and probe the performance limits of Transformer-based world models for RL. Nonetheless, given our time and funding budget, we leave this for future research. Please also refer to our **Response 1** in the **Author Rebuttal**.
- Thank you for your suggestion, and we will include these two papers in the related work section.
---
**References**
[1] Machado, Marlos C., et al. "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents." *Journal of Artificial Intelligence Research* 61 (2018): 523-562.
[2] *Bigger, Better, Faster: Human-level Atari with human-level efficiency*. Schwarzer, Max and Obando-Ceron, Johan and Courville, Aaron and Bellemare, Marc and Agarwal, Rishabh and Castro, Pablo Samuel. arXiv preprint arXiv:2305.19452, 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal!
Comment: I thank the authors for their response. I appreciate the effort made to compare STORM with TWM and the inclusion of missing papers in the related work.
Overall, the idea of the method suffers from a lack of novelty compared to TWM. However, given that the proposed method is faster and that the results are significantly better than TWM (and obtained in environments with sticky actions), I am convinced that this work would be of interest to the community. I am updating my rating from 5 to 7.
---
Reply to Comment 1.1.1:
Comment: We appreciate you taking the time to read our rebuttal and reconsider our work. Thanks for your thoughtful feedback and recognizing our contributions. | Summary: The authors propose several modifications to the recently proposed Transformer based world model for Model-based reinforcement learning. Specifically, they come up with a single latent stochastic state and treat action as an explicit input to the state as opposed to a token (as in previous works) and show significant performance and speed in limited regime of Atari 100k tasks.
Strengths: 1. Improved performance as well as significant time reduction in STORM's agent training which is crucial in MBRL agents alongside sample efficiency.
2. Well-written paper overall; it was easy for me to read in a single pass (note: I am very familiar with the MBRL space, which contributed to this, but overall the flow of the paper was very intuitive).
Weaknesses: 1. **Highlighting differences between TWM, Dreamerv3, Transdreamer, IRIS, SimPLe**:
(a) It would be much better to have a table indicating different aspects such as (i) RNN, (ii) multiple tokens, (iii) Transformer variant, etc. This would make the comparison much easier for the reader. Additionally, I did not find the motivation behind some of these modeling choices (see below for detailed descriptions).
(b) On L127 the authors write "In contrast to IRIS [12] that employs multiple tokens, STORM utilizes a single stochastic latent variable to represent an image." This is correct, but TWM's encoding style and STORM's are the same -- so I find it a bit misleading to omit TWM, as all three (TWM, IRIS, and STORM) are Transformer-based models.
(c) On L129, re: "STORM follows a vanilla Transformer [15] structure, while TWM [11] adopts a Transformer-XL [19] structure." -- it would be helpful to explain why one would use a vanilla Transformer rather than a Transformer-XL. What are the benefits?
(d) On L131, "TWM [11] treats observation, action, and reward as three separate tokens of equal importance." -- I am not sure this statement is true. The input to the transformer in the case of TWM is (obs, action, rew), but that doesn't imply they are equal -- the attention weights of the transformer decide whether to attend to these tokens or not. See Figure 6 of the TWM paper showing the attention map over $(s, a, r)$; it is clear that not all are weighted equally.
(e) On L134, "Unlike Dreamer [8] and TransDreamer [16], which incorporate hidden states, STORM reconstructs the original image without utilizing this information." -- What is the additional benefit of not using the hidden state? I don't think there is a significant reduction in time for the reconstruction.
2. **Performance**:
(a) What specific component do the authors believe accounts for the superior performance of STORM? L235-237 mention the self-attention mechanism -- which is a valid argument against the RNN-based methods. However, it is unclear to me what is helpful in STORM when compared to IRIS or TWM.
(b) L237-238: what do the authors mean by the "nature of autoencoders"? For example, the encoding style in TWM and STORM is very similar. It is unclear to me why STORM performs poorly specifically on single-object games. I'd like the authors to elaborate on this.
3. **"Decoder at rear" experiment**: I am not sure what exactly the purpose of this experiment was. From what I understand, the model can be formulated as TSSM (as mentioned in the TransDreamer paper), and hence the reconstruction would use the posterior $z_t$ and not the prior $\hat{z}_t$. Is there something that I'm missing here?
4. **Impact of trajectory**:
(a) How exactly was the trajectory used in the replay buffer? Was there any prioritized replay, or was the sampling of trajectories for training the world model uniform?
(b) Inclusion of trajectory is *not* specific to STORM -- so I'm not sure how this experiment validates the usefulness of STORM specifically and not apply to other world models (TWM, IRIS etc).
**[Very minor comment -- not considered for rating the paper]**
5. In the appendix, Table 9, it would be clearer to denote the "Imagination batch size" by another symbol or by $B \times T$, as that is effectively what happens during imagination.
----
**Rationale for my rating**
I think the model details mentioned in the paper are important to share with the broader community; however, I do think the motivation for several of STORM's modeling choices has *not* been explicitly provided.
Additionally, it is not clear to me that experiments such as "Decoder at rear" or the inclusion of trajectories are fundamentally about STORM; they seem more like a study of the Dreamer-*like* paradigm.
I would like the authors to address my concerns above and for the time being, I have leaned towards Borderline reject. I will update my rating post-rebuttal and discussion with the authors.
-----
**Post-rebuttal rating**
Based on the rebuttal provided by the authors, I've decided to bump up my rating to *Weak Accept*, as they have addressed a majority of my concerns. The only part I remain unconvinced by is the inclusion of demonstrations, which is not the major contribution of this work and would require its own analysis (a single section won't suffice, in my opinion). I once again thank the authors for their comprehensive rebuttal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See **Weaknesses** section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the effort spent reviewing our paper and for providing many valuable suggestions, which we will include in the revised version. Below, we address the main concerns raised in the review.
- Response to **(1a, 1b)**: We sincerely appreciate your suggestion, and we will include a table of differences between these algorithms in the updated version to enhance the clarity and motivation of the modeling choices.
- Response to **(1c)**: Transformer-XL was proposed to increase attention length and handle extremely long text in NLP tasks. However, a vanilla Transformer structure like BERT still supports sequences of 512 tokens (as reported in their paper, Appendix A.2), which is much longer than the typical sequence modeling horizon in RL problems: STORM, Dreamer, IRIS, and TWM generally use horizons of 16 to 64. Given these considerations, the use of Transformer-XL is unnecessary and may hurt performance and runtime.
- Response to **(1d)**: We apologize for the confusion. As you correctly point out, TWM does assign different attention weights to different tokens. What we wanted to express is that observations, actions, and rewards all enter the same self-attention process as input tokens, and performing self-attention across these different types of data (with different physical meanings) may negatively impact or limit performance (**L112**).
- Response to **(1e, 3)**, `Why ablations on "decoder at rear"`: Yes, you are correct: "decoder at rear" means the model is formulated as TSSM.
DreamerV3 is an RNN-based algorithm that fuses observations and actions into the hidden state step by step, which naturally facilitates observation reconstruction with historical information. In contrast, Transformer-based world models process a sequence of observations and actions simultaneously, leading to two options for the decoder position: **(1) decoder at rear** (like TSSM in TransDreamer), which reconstructs the original observation from the hidden state $h_t$ and prior $\hat{z}_t$, and **(2) decoder at front** (STORM), which resembles the original structure of the variational autoencoder.
TransDreamer needs a large number of samples to converge in Atari games (see Figure 5 in their paper), which inspired us to investigate this further. In the ablation study presented in **Section 5.1**, we found that the structure of STORM is superior and that the "decoder at rear" approach struggles to converge within a 100k sample budget. This suggests that the variational reconstruction loss may not effectively drive the training of a Transformer-based model between the stochastic variable and the decoder, or that the reconstruction loss is excessively influenced by the KLDiv loss. In conclusion, this design is primarily aimed at improving final performance rather than lowering computational complexity.
- Response to **(2a, 2b)**: Please refer to our **Response 2&3** in the **Author Rebuttal** above.
- Response to **(4a)**: In our experiments, the samples in the demonstration buffer and the online buffer are uniformly sampled separately. For initiating the imagination process, a batch consists of 4 demonstration trajectories and 16 online trajectories.
- Response to **(4b)**: Please refer to our **Response 4** in the **Author Rebuttal**.
- Response to **(5)**: We appreciate your constructive suggestion, and we will explain the batches further in the header of Table 9 in the revised paper.
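The sampling scheme described in the response to (4a) -- uniform sampling from each buffer separately, then concatenating 4 demonstration trajectories with 16 online trajectories per batch -- can be sketched as follows (function and variable names are ours, not from the authors' code):

```python
import random

def sample_imagination_batch(demo_buffer, online_buffer, n_demo=4, n_online=16, rng=random):
    """Uniformly sample trajectories from each buffer separately (without
    replacement), then concatenate them into one imagination batch."""
    return rng.sample(demo_buffer, n_demo) + rng.sample(online_buffer, n_online)
```

Sampling the two buffers separately guarantees a fixed demo/online ratio in every batch, whereas sampling the merged buffer uniformly would let the ratio drift with the relative buffer sizes.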
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal. Most of my concerns have been addressed!
Comment: Thanks to the authors for the detailed rebuttal and response to all my questions. I appreciate their effort on this end.
**1c** I agree with the rationale; for a benchmark on Atari, more than 512 tokens is unnecessary. However, for longer-horizon tasks where memory plays a key role, Transformer-XL can be a good choice (as shown in TransDreamer).
**1d** Thanks for acknowledging this. I agree that it gets computationally expensive to send an additional *n* tokens for rewards -- especially for real-world experiments -- and that it might harm policy learning.
**3** Thanks for the detailed explanation on this and adding this to the main paper would be incredibly helpful to the reader.
A majority of my concerns (1-3) are addressed; however, I am not fully on board with the rationale behind Section 5.3, **Impact of the demonstration trajectory**. The benefit of including a demonstration is unsurprising and has already been shown in [1]. Nonetheless, as most of my concerns have been addressed, I'd like to increase my rating to Weak Accept and vote toward acceptance of the paper.
Contrary to reviewer E5z3, I do believe that Atari 100k is a challenging and sufficient benchmark to evaluate this work. Additional experiments on other environments would be nice to have but the current experiments, in my opinion, are sufficient to back up the claims.
----
References:
[1] Multi-View Masked World Models for Visual Robotic Manipulation, Younggyo Seo, Junsu Kim et al, ICML 2023
---
Reply to Comment 1.1.1:
Comment: Thanks very much for taking the time to read our rebuttal and updating your score! Thanks for your thoughtful feedback and recognizing our contributions!
We believe that the inclusion of `demonstration/expert trajectories` has great potential and remains to be further explored. Indeed, our current investigation of this idea is via a toy example, but it may lead to a possible solution for few-shot RL in the future. Thanks for pointing out the reference! This concurrent work also reveals the benefits of the technique (it was published on May 31st, while the NeurIPS abstract deadline was May 11th).
We also thank you for supporting our experiment settings on Atari 100k.
---
Rebuttal 2:
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | Summary: The paper presented a Transformer-based model-based RL framework. As with earlier approaches, online data is gathered into the replay buffer with the learned reactive policy, the Transformer-based world model is trained with segments sampled from the replay buffer, then the policy is optimized with imaged data generated from the world model. The improved results were reported on the Atari 100K benchmark. The authors also conduct a thorough ablation study to justify heir architectural design choices.
Strengths: This paper is well-organized and easy to follow.
Weaknesses: - What criteria were employed for task selection in the ablation studies? The tasks are not consistent across the subsections, nor do they follow the same trend as Table 1 relative to the baseline models: the proposed method performs better on some tasks but not on others. To provide a more robust evaluation, it would be better to extend the ablation studies to a wider range of tasks.
- It would also strengthen the paper if the authors could provide the converged results on the full Atari benchmark.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the effort spent reviewing our paper and for providing many valuable suggestions, which we will include in the revised version. Below, we address the main concerns raised in the review.
- Response to **Weakness 1** `criteria for task selection in the ablation studies`:
- Conducting ablation studies on the entire benchmark demands approximately **7 (ablation studies) $\times$ 0.2 (days per game) $\times$ 5 (seeds) $\times$ 26 (games) = 182 (NVIDIA GeForce RTX 3090 days)**, which is resource-intensive and infeasible given our time and funding budget at present. Therefore, we have chosen representative environments that are sensitive to different configurations, based on our experience from STORM's development.
- Previous works like DreamerV3, IRIS, and TWM conduct ablation studies on their respective contributions, such as policy training strategies, number of tokens, choice of policy input, and other hyperparameters. In **Section 5.1**, we present the impact of model design and configuration, essential considerations when implementing a Transformer-based world model. In **Section 5.2**, we examine the impact of policy input selection, an aspect also studied in TWM, providing researchers with valuable insights into STORM and model-based reinforcement learning. In **Section 5.3**, we analyze the influence of a single demonstration trajectory, highlighting that the challenging exploration problem in RL can potentially be addressed by integrating external knowledge with world models in future research. We believe this analysis could inspire further investigations in this field.
- Regarding concerns about performance (`The proposed method performs better in some tasks, while it doesn't in others`), we also address them in our response to **Reviewer rfCq Weakness 2**.
- Response to **Weakness 2** `It would also strengthen the paper if the authors could provide the converged results on the full Atari benchmark`: Please refer to our **Response 1** in the **Author Rebuttal** above.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank the authors for the clarification. I would like to keep my score as it is.
---
Reply to Comment 1.1.1:
Comment: We express our gratitude for your consideration of our rebuttal. Please kindly let us know if you have further questions. Also please consider raising your score if you agree with our rebuttal. | Rebuttal 1:
Rebuttal: We express our gratitude for the valuable feedback and for the recognition of STORM's efficiency as emphasized by reviewers (4iYJ, rfCq, E5z3), along with the commendation for the paper's coherent structure and presentation, as indicated by reviewers (Cg1v, 4iYJ, rfCq). In the subsequent discussion, we address the main common points raised in the reviews.
1. Response to `STORM should be extended to other tasks like MuJoCo, Crafter, Minecraft, etc`: We appreciate the constructive suggestion and acknowledge the potential benefits of extending STORM to real-life applications like robot control or real-time game control learning tasks. However, due to the substantial computational resources required, evaluating our algorithm on a wide range of complex benchmarks, such as MuJoCo, Crafter, and Minecraft, is currently infeasible given our time and funding budget. For instance, to evaluate our algorithm on full Atari games with 200M sample steps, it demands approximately **5 (days per game) $\times$ 5 (seeds) $\times$ 55 (games) = 1375 (NVIDIA GeForce RTX 3090 days)**. While we aim to explore such extensions in the future, we think it is beyond the scope of this paper.
Meanwhile, the selection of the 26 Atari 100k games in our experiments is routine and follows previous work such as IRIS, DreamerV3, TWM, and SimPLe, the baseline methods used in our paper. It is thus commonly acknowledged that such a selection and evaluation is sufficient to verify the effectiveness of these algorithms, and we believe the same evaluation suffices to verify the effectiveness of ours.
2. Response to `Why IRIS, a Transformer based method, performs better than STORM on some tasks`: IRIS is different from SimPLe, Dreamer, TWM, and STORM. It maps an image observation to 4$\times$4 or more tokens through a VQ-VAE structure. This design is capable of more precisely capturing the position and motion of small objects like in *Pong* and *Breakout*. However, the advantage comes with a trade-off in terms of slow training and inference speed.
3. Response to `Extra ablation studies on structures of STORM and TWM`: We made modifications to our code so that the world model is organized as in TWM, and ran **additional experiments** on several environments. We trained the world model with a vanilla Transformer and trained the agent's policy with $s_t = [h_t, z_t]$. We did not use their "balanced dataset sampling" trick. The results obtained are as follows:
| Environment | STORM (paper) | STORM modified as TWM | TWM (paper) |
| :---------: | :-----------: | :-------------------: | :---------: |
| Alien | 984 | 822 | 675 |
| BankHeist | 641 | 394 | 467 |
| Breakout | 16 | 12 | 20 |
| Pong | 11 | 7 | 19 |
| MsPacman | 2673 | 1829 | 1588 |
| UpNDown | 7985 | 7092 | 15982 |
As these results show, adopting the TWM modeling approach can harm performance in several environments. Still, we acknowledge that comparing algorithms with different implementations makes fair ablation studies challenging.
However, it is crucial to highlight that even though STORM and TWM achieve similar scores, the training cost differs significantly due to the $\mathcal{O}(n^2)$ complexity of the self-attention operation, given the sequence-length ratio $n_{\mathrm{TWM}} = 3\,n_{\mathrm{STORM}}$. Specifically, the training hours required on a GeForce RTX 3090 are as follows:
| Algorithm | STORM | STORM modified as TWM | TWM | Real sampling time |
| :------------: | :---: | :-----------------: | :--: | :----------------: |
| Training hours | 4.3 | 10.5 | 12.5 | 1.85 |
It is noteworthy that current model-based RL algorithms exhibit convergence rates slower than real-time sampling, which means one would need several expensive computing devices to control a DayDreamer[1]-like robot. In contrast, using STORM is more economical and environmentally friendly under such circumstances.
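As a rough illustration of the quadratic-attention argument in point 3, the relative self-attention cost can be sketched as follows. The sequence length and model width below are assumed values for illustration only, not figures from the paper; only the stated 3x sequence-length ratio is taken from the rebuttal.

```python
def attention_flops(n: int, d: int) -> int:
    """Approximate FLOPs of one self-attention layer: the QK^T score
    matrix and the attention-weighted sum of values each cost about
    n^2 * d multiply-adds."""
    return 2 * n * n * d

n_storm, d_model = 16, 512       # assumed sequence length and width
n_twm = 3 * n_storm              # the stated 3x sequence-length ratio
ratio = attention_flops(n_twm, d_model) / attention_flops(n_storm, d_model)
# ratio == 9.0: quadratic in sequence length, independent of width
```

Non-attention layers scale only linearly in $n$, which is one reason the measured training-hour gap in the table above is smaller than this 9x back-of-the-envelope figure.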
4. Response to `Demonstration trajectories`: We agree that the inclusion of demonstration trajectories may also benefit similar algorithms like TWM, IRIS, or Dreamer. While developing our approach, we found that certain games, such as *Freeway*, appear relatively easy for humans who understand or infer the rules quickly from past experience, but they are challenging for algorithms like STORM, DreamerV3, and IRIS. Interestingly, we experimentally observed that including even a single demonstration trajectory has a substantial impact on the performance of *Freeway*, confirming similar findings mentioned in the Appendix H of the IRIS paper.
We believe that combining demonstration and model-based RL offers a promising solution to address the challenging exploration issue. However, we acknowledge that this method has some limitations, as discussed in **Section 5.3 (Lines 289-292)**, possibly due to the neglect of on-policy distribution, as mentioned in **Section 6 (Lines 321-325)**. As a result, we present the results on three typical games to demonstrate the potential benefits of demonstration trajectories while acknowledging these limitations. We hope to draw the attention of other researchers to further explore this avenue and evaluate the merits of integrating demonstration trajectories with model-based RL techniques.
---
**References**
[1] Wu, Philipp, et al. "Daydreamer: World models for physical robot learning." *Conference on Robot Learning*. PMLR, 2023. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Probabilistic Invariant Learning with Randomized Linear Classifiers | Accept (poster) | Summary: The authors propose randomized linear classifiers (RLCs) that are able to solve relatively general binary classification problems. They derive a trade-off between the probability that sampling of RLCs leads to an accurate prediction and the number of samples needed to obtain the majority vote of an RLC.
Furthermore, they discuss more detailed results for three invariant learning problems related to inputs in the form of sets, graphs, or spherical data.
Strengths: - **Clarity:** The paper is well written and well understandable despite a strong theoretical component.
- **Originality:** The authors propose to use probabilistic binary classifiers instead of deterministic ones to trade-off the number of required trainable parameters with prediction accuracy. A similar idea has been put forth by Sieradzki et al. (ICML22) for a number of interesting examples. This work seems to be the first to use randomized models for invariant learning (and relatively general binary classification problems).
- **Significance:** The idea is significant because it discusses a novel problem solution perspective to overcome some computational barriers of the deterministic setting.
- **Quality:** Mathematically rigorous analysis and exposition of the main idea. As a highlight, Theorem 1 gives a practically relevant estimate of the trade-off between required samples and the probability of accurate predictions.
The experiments could be designed in better support of the theory but still point out some potential benefits of RLCs for binary classification problems on sets and graphs.
**Detailed points of strengths:**
- The number of parameters used by RSetC does not depend on the set size, in contrast to DeepSets. Yet, the resource consumption depends on the smoothness of the distribution of linear classifier weights and bias. (Note that this might also imply disadvantages over DeepSets in some settings.)
- RGraphCs can approximate any problem that either has a smooth boundary or can be tested with an inner product as in Definition 3. This includes the classification of graph connectivity (which cannot be solved by simple GNN architectures).
Weaknesses: - The analysis is limited to binary classification problems.
- The claimed computational resource savings are not obvious from the exposition and not directly covered by the theorems. According to the theory, the number of required parameters of RLCs could be higher than the ones of a deterministic solution (by a constant that could be very large). The theory does not establish that a smaller number of samples m (leading to potentially lower accuracy) would work in combination with fewer neural network parameters.
(Some potential benefits for invariant learning are established based on reduced input dependence. But the changed criteria related to the smoothness of decision boundaries could also lead to worse results for RLCs in some situations.)
- The experiments were not designed clearly to highlight computational advantages but to show accuracy improvements on small scale tasks, in which the baselines cannot perform reasonably well because of an unrealistic computational resource constraint.
It is likely that the baselines would actually achieve much higher accuracy with sufficient model capacity. Could RLCs actually perform at least on par if equipped with the same capacity that allow the baselines to work well?
- The statement in Line 338: "This comes from the fact that GNNs cannot distinguish simple non-isomorphic graphs such as any pair of d-regular graphs. As such, GNNs cannot universally approximate tasks that assign different labels to any of such pairs. In contrast, RGraphCs can approximate any problem that either has a smooth boundary or can be tested with an inner product..."
is not entirely correct. This limitation only applies to simple GNN architectures. Modern architectures that use attention and suitable node labels do not suffer from this problem. Experiments that compare with such stronger baselines would be more informative regarding the practical implications of the proposed work.
- Conceptually, RLCs might require fewer trainable parameters in some cases, which could lead to advantageous memory requirements. Yet, they are not necessarily computationally more efficient at inference, which is not discussed at all in the paper. The fact that each inference needs to evaluate the trained neural network m times could increase the associated FLOPS significantly.
- For the same reason, the experiments might not be fair, as the baselines are allowed fewer computational resources at inference than the RLCs.
- **Reproducibility:** No code has been shared.
**Points of minor critique:**
- Line 297: $u_b$ is listed twice instead of once.
- Figures: It would help to color also confidence intervals because it currently is impossible to distinguish them when they are overlapping.
- The size of the validation set is not mentioned (neither in the main paper nor the Appendix). Only the size of the training and test set is given.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - How many samples $m$ did RSetC and RGraphC use in the experiments? This seems to be quite an important parameter in determining the maximum achievable accuracy and the required computational resources for the evaluation of RSetC and RGraphC.
- Experimental set-up in support of story:
I would like to understand the required computational resources in an experimental context that directly supports the theoretical claims. 1) Start from good baselines that have enough neural network capacity to actually solve the discussed problems. 2) Then design RLCs that achieve a similar accuracy and compare the used computational resources. In this context: How many FLOPS does the evaluation of RLCs cost? How many trainable parameters do they use?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations have been pointed out under weaknesses and were partially discussed by the authors. I do not foresee a major or immediate negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments. We are glad the reviewer appreciated our contributions. In the feedback summary we address your questions about i) settings where the deterministic models succeed and how it can be compared to RLCs in terms of resources and ii) how the number of samples $m$ can impact our resources. We will also take in your suggestion about color coding in the plots for the final version.
> According to the theory, the number of required parameters of RLCs could be higher than the ones of a deterministic solution (by a constant that could be very large).
In the proof of Theorem 1 we can see that the constant cannot be arbitrarily large. If $p^\dagger$ is the number of parameters that an MLP with 1 hidden layer needs to solve the task, the constant will be at most 2. We thank the reviewer for pointing out this is not clear in the main text and we will make it explicit in the final version.
> The statement in Line 338: [...] is not entirely correct.
Thanks, we will clarify that we refer to GNNs with expressiveness bounded by the Weisfeiler-Leman test. Note, however, that the mentioned node-labeling methods (e.g., random features) do not have the invariance property, which is the main theme of our work.
> "How many samples $m$ did RSetC and RGraphC use in the experiments?"
We used $m = 10$; we will add this to the main paper in the final version.
---
Rebuttal Comment 1.1:
Title: Estimate of FLOPs
Comment: I thank the authors for the clarifications.
Could they also report the FLOPs associated with their methods in comparison with the baseline? The information that is currently provided suggests that baseline models with sufficient capacity can achieve a higher accuracy potentially. However, how big are the achieved computational savings exactly?
Or could the accuracy of the proposed methods be boosted by increasing the number of samples?
---
Reply to Comment 1.1.1:
Title: FLOPs
Comment: We thank the reviewer for the interest in our work.
> Comparison of FLOPs.
This is a great question. Our theory shows that our models can achieve universality with a number of parameters independent of the maximum size of the input sets and graphs---an important distinction between RLCs and deterministic classifiers, because these maximum sizes are not typically known in practice. But, as you suggest, we cannot provide a guarantee on the time complexity (or number of FLOPs), and certainly one could design settings in which RLCs need many samples to provide accurate predictions with high probability.
To address your question empirically, we've re-run our in-distribution experiments using a DeepSets model that uses at least as many FLOPs as the RLCs. Since we cannot upload pdfs in the comments, we write in a separate comment a table with results. As you can see the results are similar to the results in the paper (with RLCs dominating at large set sizes). Thanks for asking this question; we will include this result in the final draft.
> The information that is currently provided suggests that baseline models with sufficient capacity can achieve a higher accuracy potentially.
This is certainly true for Deep Sets when the hidden layer size matches the maximum input set size, but it is never true for GNNs (see line 338). Our benefits on set tasks are in the regime of a constant number of parameters.
> Or could the accuracy of the proposed methods be boosted by increasing the number of samples?
Certainly it can, but as we noted in Theorem 1 and in the feedback summary, the predictions converge exponentially with the number of samples; thus the benefit saturates after only a few samples, due to our sample efficiency.
We thank the reviewer once again for their valuable feedback, we will add this discussion to the final version. | Summary: The paper introduces a novel approach for achieving universality and invariance in binary classification tasks, while minimizing computational requirements. Instead of relying on deterministic neural networks such as DeepSet, which have parameterization complexity proportional to the set size, the paper proposes using randomised linear classifiers (RLCs). These RLCs can maintain invariance to compact group transformations and provide a universal approximation. Importantly, the parameterization complexity of RLCs remains independent of the set size. Building upon this finding, the paper extends the design of RLCs to ensure invariance in classification tasks involving sets, graphs, and spherical data. Experimental results demonstrate that the proposed RLCs outperform DeepSet and GNN approaches in certain invariant tasks.
Strengths: • The paper makes a notable theoretical contribution by presenting a novel approach that utilizes randomness to achieve universality and invariance in binary classification tasks. As far as my knowledge extends, this is the first approach of its kind to utilize randomness in this context.
• The incorporation of de Finetti's, Aldous-Hoover's, and Freedman's theorems is a significant contribution of the paper. These theorems play a crucial role in making RLCs applicable and practical for dealing with set, graph, and spherical data. By leveraging these theoretical foundations, the paper provides valuable insights into achieving invariance and universality across various data domains. This expands the scope of invariance and opens up new possibilities for addressing similar challenges in different contexts.
Weaknesses: My main concern pertains to the scalability of the proposed approach. While deterministic neural networks only require a single feed-forward process to produce an output, RLCs necessitate the sampling of multiple randomnesses to compute the linear weights. This introduces computational complexity during both training and testing phases. Consequently, I am particularly interested in understanding how RLCs perform on real-world tasks compared to deterministic models like DeepSet and GNN. It remains to be seen whether RLCs can deliver competitive results in practical scenarios, considering the additional computational demands imposed by their inherent stochastic nature.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: • What impact does the hyperparameter $m$ have on the model's performance? It appears that a larger value of $m$ is necessary to achieve highly confident predictions. This suggests that increasing $m$ allows the model to gather more evidence and make more precise decisions. It would be valuable to investigate how different values of $m$ influence the model's accuracy and confidence levels across various tasks and datasets.
• The impact of random distribution on model performance is a noteworthy aspect to consider. I am curious about whether different randomness would affect the model performance. What kind of randomness do you use in the experiment?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper presents some notable limitations despite its promising theoretical results:
• Scalability: The applicability of the proposed RLCs appears to be challenging in large-scale scenarios compared to deterministic methods. Further investigations and enhancements are needed to improve scalability and address potential computational complexities.
• Unclear impact of $m$ and randomness: The paper would benefit from additional discussions and experiments to explore the effects of the hyperparameter $m$ and the selection of randomness sources. This would provide better insights into the optimal choices and their influence on model performance.
• Lack of real-world tasks: The evaluation of the proposed method is limited to synthetic tasks, and it would be valuable to extend it to real-world datasets. This would allow for a more comprehensive understanding of the method's capabilities and potential applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! In the feedback summary we address your questions about $m$ and the choice of randomness source. Please, let us know if you have any extra input, we'd be interested in discussing. | Summary: This work presents a very interesting method for training efficient invariant classifiers by leveraging randomness. More precisely, it proposes to train a neural network to sample linear classifier weights, by pushing forward some data-independent distribution, and using the majority vote over sampled classifiers to make predictions. It is rather a theoretical paper which proves universal approximation and G-invariance theorems for this class of classifiers (adapted to the probabilistic setting). The special cases of set and graph invariance are studied (with relaxed assumptions). Spherical data is also addressed in the appendix. A few toy experiments illustrate the theoretical results proved.
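The sampling-and-voting inference loop described in this summary can be sketched as follows. This is an illustrative toy, not the paper's implementation: the push-forward network here is a small untrained MLP with assumed dimensions, and standard Gaussian noise serves as the data-independent randomness source.

```python
import numpy as np

rng = np.random.default_rng(0)

D_NOISE, D_IN, D_HID = 8, 4, 16
# Untrained push-forward network g: noise z -> (weights, bias) of a
# linear classifier. In the paper, g's parameters are what is learned.
W1 = rng.normal(size=(D_NOISE, D_HID))
W2 = rng.normal(size=(D_HID, D_IN + 1))  # last output is the bias

def sample_linear_classifier():
    z = rng.normal(size=D_NOISE)      # external, data-independent randomness
    out = np.tanh(z @ W1) @ W2        # push z forward through g
    return out[:-1], out[-1]          # (w, b)

def rlc_predict(x, m=9):
    """Majority vote over m sampled linear classifiers sign(w.x + b)."""
    votes = sum(np.sign(w @ x + b)
                for w, b in (sample_linear_classifier() for _ in range(m)))
    return 1 if votes > 0 else -1
```

Note that the parameter count of `g` is independent of any maximum set or graph size; the invariant variants (RSetC, RGraphC) additionally constrain the distribution of sampled weights.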
Strengths: ### Originality
Although the paper borrows some ideas from CFNNs, it is largely original in its restriction to linear classifiers and focus on invariant representation learning.
### Clarity
The paper is clearly written and well-structured.
### Quality
I read two out of three proofs from the supplementary material and they look fine to me. Toy experiments are well-designed.
### Significance
This work addresses the important question of invariant representation learning with a very original method that has some benefits compared to existing methods.
Weaknesses: ### Originality
1. The only remark I would have here is that there are a few references missing from the invariant representation learning paragraph of the related work section. Namely, there are methods which are quite different from the proposed one but which also make use of randomness to obtain invariant representations, such as Augerino (https://arxiv.org/abs/2010.11882) and AugNet (https://arxiv.org/abs/2202.02142).
### Clarity
2. The only remark I have is that the error bars on the figures are difficult to associate to each dot as they are not color-coded.
Typos:
- l.126: supp(x)
- l. 297: “$u_b$, and $u_b$” ($u_b$ twice)
### Quality
I only have two remarks/questions regarding the method and experiments:
3. I realize that the paper contribution is rather theoretical, but the experiments are very very toy. I realize the value of the experiments carried out, but I wonder whether it would be possible to showcase the method in a more realistic setting.
4. As far as I understood, the model invariance relies on underlying theorems specific to each setting (de Finetti for sets, Aldous-Hoover for graphs and Freedman for spherical data). Is that right? It would be interesting if you could discuss to which extent your framework can be adapted to arbitrary symmetry groups and whether this is a limitation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See questions above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A few limitations are discussed throughout the paper, but not very thoroughly. A major limitation for now is that this is a very preliminary study showcasing the new method only on toy binary classification problems. Although one cannot expect it to address all research questions in a single paper, some directions on how the proposed approach could have practical use would be welcome.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback. We refer the reviewer to our feedback summary for an additional experiment. We will also take in your suggested references in the final version. If you have any other input we'd be happy to discuss.
> "As far as I understood, the model invariance relies on underlying theorems specific to each setting (de Finetti for sets, Aldous-Hoover for graphs and Freedman for spherical data). Is that right? It would be interesting if you could discuss to which extent your framework can be adapted to arbitrary symmetry groups and whether this is a limitation."
Theorem 2 shows that our framework can be adapted to any compact group transformation. That is, there is an equivalence between designing invariant distributions and designing invariant RLCs. Any result characterizing distributions invariant to a compact group transformation, such as the ones you cited, can be used to design invariant RLCs. We will highlight this in the final version, thank you for bringing this up.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: I would like to thank the authors for their thoughtful rebuttal. I have read it, as well as the discussions between the authors and the other reviewers. Most of my concerns have been well addressed and I am hence raising my rating by one point.
Strengths: The notion that the number of parameters needed for $f_\theta$ can be much smaller than the number of parameters needed for a classifier that is a deterministic neural network is nice.
Weaknesses: From what I can tell, the paper only considers the expressivity of RLCs, and the number of parameters needed. Is there a result about learning $f_\theta$?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Typos in line 219, 287.
As a sanity check that I have understood this, you don't have access to the true gradients for $\theta$ until $m \to \infty$, right?
Is this similar to learning a one layer NN in which during training and inference some random subset of neurons is used each time (for forward and backward passes)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. Next, we address your comments. Please, let us know of any additional feedback you might have. We would be very happy to discuss.
> "From what I can tell, the paper only considers the expressivity of RLCs, and the number of parameters needed. Is there a result about learning $f_{\theta}$?"
Our work addresses the well-known tradeoff between computational resources and model capacity in invariant models. We note that even in deterministic invariant learning, the question of generalization/learnability is not yet fully understood; see for instance https://openreview.net/forum?id=HxeTEZJaxq
We believe that our probabilistic view on the problem can, in a future work, shed new light into the generalization capabilities of invariant models.
> "As a sanity check that I have understood this, you don't have access to the true gradients for $\theta$ until $m \to \infty$, right?"
Yes, exactly: we only have access to gradient samples.
> "Is this similar to learning a one layer NN in which during training and inference some random subset of neurons is used each time (for forward and backward passes)?"
Yes, this is precisely the idea behind the proof of Theorem 1 as you can see in the appendix. | Rebuttal 1:
Rebuttal: ### **_Feedback summary_**
We thank all the reviewers for the valuable feedback on our manuscript. In general, reviewers found the paper to be well written and appreciated the novel direction given by our theoretical contributions. Here we address three common points raised across reviews. Then, we discuss specific comments separately in each reviewer's thread.
1. **Choice of external randomness:** Reviewers asked about the impact of the choice of randomness source. We highlight that our theory shows the sufficiency of absolutely continuous sources. As in models such as GANs or VAEs, neural networks have the capacity to transport simple distributions, e.g., Gaussian, to arbitrarily complex ones. Our experiments confirm that this also seems to be the case for RLCs (note that RLCs solve supervised learning, a task easier than the data generation tackled by GANs and VAEs). Moreover, if there is prior knowledge about the solution, i.e., about the ideal distribution of the linear coefficients, one should leverage it. In fact, this is precisely what our invariant models do (through parameter-sharing); see Theorem 2.
2. **Number of samples ($m$):** Some reviewers asked about the impact of the number of samples $m$ on the computational complexity of the model. We point them to Theorem 1, where we show that only a few samples are needed to output the model's true prediction with high probability, and that the agreement probability converges to 1 exponentially in $m$. For instance, if the model outputs 1 with probability 2/3, a majority vote over 3 samples outputs 1 with probability $20/27 \approx 0.74$, and over 9 samples with probability $\approx 0.86$. This result is only possible due to the independence between the input and the randomness source.
3. **Experiments:** Reviewers raised two interesting points regarding our experiments. How do RSetCs compare on tasks where Deep Sets is supposedly good? Can we compare the resource consumption of RSetCs vs. Deep Sets when both perform well on a task? To address them, we consider the task of deciding whether the majority of a set of random numbers is positive or negative. This task essentially removes the sorting operation from the task in the paper, where Deep Sets struggles. We then compare Deep Sets with one and two hidden layers vs. an RSetC with a single hidden layer. The results are in the attached image. We see that the single-layer RSetC performs more similarly to the Deep Sets model with two hidden layers, showcasing our parameter efficiency. The experimental setup was the same as for the sort task in the main paper. We will add this experiment to the final version; thank you for the input!
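The exponential amplification referred to in point 2 can be computed exactly with a binomial tail. This is a standalone illustration (with $m$ restricted to odd values so that ties cannot occur), not code from the paper:

```python
from math import comb

def majority_agree_prob(p: float, m: int) -> float:
    """Probability that a majority vote over m independent samples,
    each returning the model's more likely label with probability p,
    outputs that label (m odd, so ties cannot occur)."""
    assert m % 2 == 1 and p >= 0.5
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range((m + 1) // 2, m + 1))
```

For $p = 2/3$, the vote agrees with the model's true prediction with probability $20/27 \approx 0.74$ at $m = 3$ and $\approx 0.86$ at $m = 9$, converging to 1 exponentially in $m$.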
Pdf: /pdf/a8ebeca28d145ec6c6c9cfd23a569585cb7a2c64.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes a framework called Randomized Linear Classifiers (RLCs) that leverages external randomness to build models that are expressive and can encode invariance in the input space. The authors establish probabilistic versions of universal approximation theorem and invariance for several types of RLCs. The key insight from the theory is that by maintaining probabilistic universal approximation and invariance, RLCs can be more parameter-efficient than their deterministic counterparts. Numerical experiments verify the theoretic properties.
Strengths: - The paper is generally well-written and explains the implications of the assumptions and theorems clearly.
- RLCs extend the idea of Coin-Flipping Neural Network with stronger theoretic backgrounds and more general formulations and invariant extensions.
- The probabilistic notions of universal approximation and invariance potentially open up a new direction.
Weaknesses: - I am not so sure about the benefits of "Online computation" and "Private computation". What is the difference between standard online inference and RLCs? In the inference phase, the client downloads the model and computes the prediction without sending the inputs, so it is already private.
- I am not sure how the external randomness would affect the generalization performance of RLCs. From the theorems, it seems that the only requirement for the randomness source is to be absolutely continuous, so that MLPs can universally approximate? In the experiments, the external randomness was fixed as standard normal. I assume that the distribution of the external randomness would greatly affect sample complexity? For example, RLCs may require more restarts to amplify the probability if a bad external randomness source is used? I think this part needs some further discussion.
- In the numerical experiments, the performance of DeepSets looks pretty bad, even worse than using a constant predictor. Is this normal? It would be more convincing if RLCs were examined on tasks where DeepSets performs well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be interesting to see how the external randomness affects the generalization performance of RLCs.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. Please refer to our feedback summary, where we address your questions about the randomness distribution and the Deep Sets comparison. If you have any extra input, we would be very happy to discuss.
>I am not so sure about the benefits of "Online computation" and "Private computation". [...] What is the difference between standard online inference and RLCs?
The benefits are in terms of resource consumption. In standard online inference, one can indeed send (or store) the model, but this can be arbitrarily large. In our case, we simply need to send (or store) the pre-sampled linear coefficients. As we say in the private computation paragraph, this can be very useful in settings with low-resource computers, e.g., smartwatches. | null | null | null | null | null | null |
Weighted ROC Curve in Cost Space: Extending AUC to Cost-Sensitive Learning | Accept (poster) | Summary: This paper proposes more robust learning by combining the WAUC and cost-sensitive learning. The authors claim that it is robust to shifts of the cost function and covariate distribution. The algorithm is constructed as a bi-level optimization with inner and outer parts. Experiments show high performance, especially at earning money in cost-sensitive situations.
Strengths: The approach is somewhat similar to Bayesian, considering the randomness. The motivation is clear, and the algorithm is well-established. Furthermore, this paper is successful in providing the appropriate cases.
Weaknesses: The presentation is not good. For example, the notation $\hat{L}_{WAUC}$ appears in isolation in Proposition 5.1. Also, there are many typos concerning the indices in the formulations and the algorithm.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: There are some questions.
i) The definition of $\mathcal{K} (S_w^{-}, \tau) $ should be clarified.
ii) Is it the right notations $\nabla \hat{f}$ and $\nabla \hat{g}$ in Alg. 1?
iii) The effect of $T$ is strange. Usually, a large $T$ can achieve better performance. Can you explain this phenomenon correlated to $\alpha_k$ and $\beta_k$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No detailed limitations. Maybe, the algorithm can work well in a dynamic situation. However, the static situation is not thoroughly examined.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Author response to Reviewer Trb9
Thank you for your detailed and constructive feedback on the paper. We value your insights and have taken your suggestions into consideration. Here are our responses to your specific comments.
**Q(1) What the meaning of $\mathcal{K}(S_w^-,\tau)$**
**A(1)** We apologize for our unclear expression. $\mathcal{K}(S,\tau)$ is defined in the KDE definition in Appendix C.1:
$$
\mathcal{K}(S, \tau)=\frac{1}{|S| m} \sum_{x_i \in S} K\left(\frac{x_i-\tau}{m}\right)
$$
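As an illustrative sketch of how such an estimate is computed (assuming a standard Gaussian kernel $K$; the function names are ours, not the paper's code):

```python
import numpy as np

def kde(S, tau, m):
    """Evaluate K(S, tau) = 1/(|S| * m) * sum_i K((x_i - tau) / m)
    with a standard Gaussian kernel K (an illustrative choice)."""
    u = (np.asarray(S, dtype=float) - tau) / m
    return float(np.sum(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi))) / (len(S) * m)

# Estimated density of negative-class scores at threshold tau = 0.3:
scores = [0.1, 0.2, 0.25, 0.4, 0.8]
density = kde(scores, tau=0.3, m=0.1)
```

Here $m$ plays the role of the bandwidth, consistent with the notation in the definition above.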
**Q(2) Is it the right notations $\nabla \hat{f}$ and $\nabla \hat{g}$ in Alg.1?**
**A(2)** According to Eq. (13), $\hat{f}$ and $\hat{g}$ are defined by:
$$
\hat{f}\left(\boldsymbol{w}, \boldsymbol{\tau}\^*\right):=\hat{\mathcal{L}}\_{WAUC}\left(\boldsymbol{w}, \hat{\boldsymbol{\tau}}^\* \right) \\\\
\hat{g}(\boldsymbol{w}, \boldsymbol{\tau}):=\frac{1}{n\_\tau} \sum\_{l=1}\^{n\_\tau} \hat{\mathcal{L}}\_{e q}\left(\boldsymbol{w}, \tau\_l, c\_l\right)
$$
Hence, $\nabla \hat{f}$ and $\nabla \hat{g}$ are gradient of $\hat{f}$ and $\hat{g}$.
**Q(3) The effect of T is strange. Usually, a large T can achieve better performance**
**A(3)** In fact, if the optimization problem is a deterministic one (where all samples are processed in each iteration), then indeed a larger T would be better as it allows the inner loop to directly get the optimal state.
$$
\hat{\boldsymbol{\tau}}\^\*=\underset{\boldsymbol{\tau}, \boldsymbol{P}\_a, \boldsymbol{N}\_a}{\arg \min } \hat{g}(\boldsymbol{w}, \boldsymbol{\tau}):=\frac{1}{n\_\tau} \sum\_{l=1}\^{n\_\tau} \hat{\mathcal{L}}\_{e q}\left(\boldsymbol{w}, \tau\_l, c\_l\right)
$$
However, the optimization problem we are studying is a stochastic one (where only a subset of samples is processed in each iteration), so a larger T is not necessarily better. This is because with a larger T, the inner optimization only selects a subset of samples, resulting in the optimal value of the inner function being only the optimal parameters for the current set of samples, rather than the global optimal parameters. As a result, it is easy to get stuck in a local optimum, leading to poor generalization performance.
$$
\hat{\boldsymbol{\tau}}\^\*(\mathcal{B})=\underset{\boldsymbol{\tau}, \boldsymbol{P}\_a, \boldsymbol{N}\_a}{\arg \min } \hat{g}(\boldsymbol{w}, \boldsymbol{\tau};\mathcal{B}):=\frac{1}{n\_\tau} \sum_{l=1}\^{n\_\tau} \widehat{\mathcal{L}}\_{e q}\left(\boldsymbol{w}, \tau\_l, c\_l;\mathcal{B}\right)
$$
$$
\hat{\boldsymbol{\tau}}\^\*\neq \hat{\boldsymbol{\tau}}\^\*(\mathcal{B})
$$
where $\mathcal{B}=\{\boldsymbol{x}\_i,y\_i\}\_{i=1}\^B$ is a sampled batch of data, $ \hat{\boldsymbol{\tau}}\^\*(\mathcal{B})$ only suits for data distribution of $\mathcal{B}$.
It is notable that as the batch size $B$ approaches the data size $n$, $T$ should also increase; that is, $B\rightarrow n$ implies $T\rightarrow \infty$.
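A toy sketch of this batch-overfitting effect (illustrative only, not the paper's Algorithm 1): running many inner steps on a single batch drives the inner variable to the batch-optimal value rather than the population-optimal one.

```python
import random

def inner_step(tau, batch, beta=0.1):
    """One illustrative inner gradient step: pull tau toward the batch
    mean (a stand-in for minimizing the inner objective on this batch)."""
    grad = tau - sum(batch) / len(batch)
    return tau - beta * grad

def run_inner_loop(data, batch_size, T, tau=0.0, seed=0):
    """Run T inner steps on one sampled batch B; for large T, tau
    converges to the optimum for B, not for the full dataset."""
    rng = random.Random(seed)
    batch = rng.sample(data, batch_size)
    for _ in range(T):
        tau = inner_step(tau, batch)
    return tau
```

With a full batch, a large $T$ recovers the dataset optimum; with a small batch, a large $T$ overfits the inner variable to that batch, mirroring $\hat{\tau}^*(\mathcal{B}) \neq \hat{\tau}^*$.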
**Q(4) the notation of $\hat{L}_{WAUC}$ is isolated in proposition 5.1.**
**A(4)** We use $\hat{L}\_{WAUC}$ in Eq.13 $\hat{f}(\boldsymbol{w},\boldsymbol{\tau}^\*):=\hat{L}\_{WAUC}(\boldsymbol{w},\boldsymbol{\tau}^\*)$.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for your response.
Issues concerning the notation and presentation are clarified. It is better to reorganize the mathematical formulas to understand the reader better. I emphasize that the definition should appear before using the corresponding term.
The effect of $T$ is not yet clarified. Can you provide the trace plot of the parameter estimates or loss values along with $T$ (others are okay)? It can differ between datasets, implying the strength of convergence or the optimal-$T$ problem.
---
Reply to Comment 1.1.1:
Title: Author response to Reviewer Trb9
Comment: Apologies for the lack of clarity regarding T. Allow us to reframe the explanation of the effect of T from the perspectives of optimization and generalization.
- Optimization: From an optimization standpoint, it is intuitive that when T is sufficiently large, the model will converge to the optimal values of the inner parameters within the current batch. However, it is also important to note that optimizing the optimal solution within the current batch only guarantees a local optimum, not a global one.
- Generalization: From a generalization perspective, when T is sufficiently large, the model will fit the optimal inner parameters for each batch. However, when encountering a new batch with significant differences in data, the optimal inner parameters from previous batches will not be applicable, leading to a sharp decline in the model's performance.
We conducted several experiments to demonstrate the effect of T. By setting different values for T, we examined its impact on the overall loss. The experimental results can be viewed from the following link:
- [T=5](https://anonymous.4open.science/r/WAUC-9B9B/T_5.png)
- [T=10](https://anonymous.4open.science/r/WAUC-9B9B/T_10.png)
- [T=15](https://anonymous.4open.science/r/WAUC-9B9B/T_15.png)
- [T=20](https://anonymous.4open.science/r/WAUC-9B9B/T_20.png)
It is evident that as T increases, the fluctuation of the loss between two adjacent values of K rises sharply. Therefore, a reasonable value of T serves as a guarantee of overall optimization and generalization, rather than bigger always being better. | Summary: This paper proposes a weighted AUC (WAUC) loss that is robust to both class distribution shift and cost distribution without class and cost priors. A bilevel optimization paradigm is proposed to bridge WAUC and cost. The authors propose a stochastic optimization algorithm for WAUC, and prove its convergence rate. Extensive experiments are conducted to evaluate the proposed approach.
Strengths: - This paper is well organized and easy to read. The main goals and core challenges are listed clearly, and solved one by one.
- A novel cost-sensitive setting is proposed in this paper, where the cost is obtained by sampling instead of an available prior.
- The proposed WAUC is robust to both distribution shift and cost distribution, whereas AUC/PAUC and cost learning fail to achieve the two robustness simultaneously.
- Sound theoretical analysis is presented, including the convergence of the WAUC estimation and the convergence rate of the proposed bilevel optimization.
Weaknesses: - Complexity analysis of the proposed optimization approach is lacking.
- The error bars are not provided.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - How are the time and space complexities of Algorithm 1 compared with AUC optimization or other comparable approaches?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
## Author response to Reviewer N1ih
Thank you for your detailed and constructive feedback on the paper. We value your insights and have taken your suggestions into consideration. Here are our responses to your specific comments.
**Q(1) Complexity analysis of the proposed optimization approach is lacking**
**A(1)** Thank you for your suggestion!
Firstly, we analyze the time complexity (per iteration) of our method and the baselines.
- WAUC-Gau (WAUC method): $O(n_\tau n_+ + n_\tau n_-)$
- ExAUC (AUC method): $O(n_+n_-)$
- ECL (cost-sensitive learning method): $O(n_\tau n_+ + n_\tau n_-)$
We conduct experiments on time complexity with a fixed number of epochs and varying $n_+$ and $n_-$. All experiments are conducted on an Ubuntu 16.04.1 server with an Intel(R) Xeon(R) Silver 4110 CPU (to rule out the effect of parallel computing). For every method, we repeat each run 10,000 times and record the average running time. We record only the loss-calculation time, measured with the Python function `time.time()`.
|method/unit:s|$n_+,n_-=128$|$n_+,n_-=256$|$n_+,n_-=512$|$n_+,n_-=1024$|$n_+,n_-=2048$|
|:---:|:---:|:---:|:---:|:---:|:---:|
|BCE|1.352|1.856|3.285|6.195|11.957|
|ExAUC|1.481|1.952|3.951|7.592|12.903|
|SqAUC|1.380|1.988|4.041|7.813|12.203|
|NWAUC|1.648|2.268|4.241|8.853|16.947|
|PAUC-exp|1.380|1.968|3.741|7.748|13.374|
|PAUC-poly|1.402|2.075|4.013|7.983|14.183|
|PAUCI|2.085|3.597|6.592|10.967|22.571|
|CS-hinge|1.880|4.193|7.893|11.213|23.846|
|AdaCOS|2.197|3.896|6.871|13.414|20.487|
|ECL|1.974|3.268|5.862|10.831|17.127|
|WAUC-Gau|1.980|2.975|4.587|8.681|16.976|
|WAUC-Log|1.897|2.790|4.924|8.487|16.891|
**The results indicate that there is no significant difference in the running time of the WAUC method compared to other binary classification methods.**
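The timing protocol above can be sketched roughly as follows (a hypothetical harness, not the actual benchmarking code):

```python
import time

def average_loss_time(loss_fn, batches, repeats=10_000):
    """Illustrative harness: average wall-clock time per repeat of a
    loss computation, in the spirit of the protocol described above."""
    start = time.time()
    for _ in range(repeats):
        for batch in batches:
            loss_fn(batch)
    return (time.time() - start) / repeats

# e.g., timing a stand-in loss on two dummy batches:
elapsed = average_loss_time(sum, [[1.0, 2.0], [3.0, 4.0]], repeats=1000)
```

Averaging over many repeats smooths out scheduler noise, which matters when the per-iteration differences between methods are small.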
**Q(2) The error bars are not provided**
**A(2)** In our experimental setup, we indeed run each method multiple times and take the average. However, due to space limitation in the paper, we omitted the inclusion of standard deviation in the experimental table. We have included standard deviation in Table 3, please click [clickable url](https://anonymous.4open.science/r/WAUC-9B9B/error_bar.png) to open. We plan to include the detailed information in future revisions of the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. The rebuttal solved my concerns, and I am inclined to keep my score. | Summary: This paper considers usage of the AUC, in a cost sensitive setting, i.e. where miss classification cost is not uniform. Extensions of the AUC have been considered on parametrised cost distributions, such as the WAUC. In this paper the authors aim to develop a cost sensitive extension to the AUC that does not depend on prior information of the cost distribution. To do this the authors propose a bilevel optimisation problem, where the inner loop estimates the optimal threshold of the scoring function, according to the cost, and then the outer loop estimates the WAUC. The performance of their method is evaluated on several real world datasets, along with sensitivity analysis.
Strengths: The authors carry out experiments on a wide variety of data sets with a rich set of benchmarks.
Weaknesses: I do not understand the statement of Proposition 5.1. To me it is not a proposition but rather the definition of the estimator $\hat{WAUC}$. The proof of proposition 5.1 is similarly confusing and seems to be a reformulation of the estimator $\hat{WAUC}$ which is then used in the proof of Lemma 5.2. Also I assume $\tau_k$ in eq 22 is a typo and should be $\tau$? And that Lemma 5.2 holds when $\tau^*$ is known, as opposed to $\tau$, another typo.
Lemma 5.2 itself is vague, it does not specify how the total number of instances must grow with the negative instances, only that it must be “large enough”. There are no results on the rate of convergence. Also there is no comment on what happens when $\tau^*$ is not known and we instead use the estimator, as is the case in the bilevel optimisation.
Theorem 5.3 is also vague, the estimator for the convex formulation of the empirical loss is said to have the same minimum when the “parameters satisfy the penalty”. The exact penalty in question is not clear to me.
There is no proof of Theorem 6.3, are we to assume it follows immediately from [5]? I do not see how the results of [5] lead immediately to Theorem 6.3.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: There is no proof of Theorem 6.3, are we to assume it follows immediately from [5]? I do not see how the results of [5] lead immediately to Theorem 6.3.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Potential limitations are given good consideration. The authors use a convex estimator of the none convex cost function when solving for the optimal threshold, the theoretical convergence of said estimator holds only when certain constants, $M, \kappa, M'$ are sufficiently large, leading to a potential limitation when used in practice. The authors explore this, observing the difference between the optimal threshold, when using the original and convex estimator, of the cost function, for several fixed values of $M, \kappa, M'$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Author response to Reviewer uAig
**Q(1-1) Proposition 5.1 is not a proposition**
**A(1-1)** We present it as a proposition because by Lemma 5.2, we can derive a convergence result for WAUC, showing that $\|\hat{WAUC}-WAUC\|$ converges at a rate of $O(\sqrt{\frac{\log n_-}{n_- m}})$. Please refer to Q(2-1) for more details.
**Q(1-2) $\tau_k$ in eq 22 is a typo and should be $\tau$?**
**A(1-2)** Thank you very much for your keen observation, this is a typo.
**Q(1-3) Lemma 5.2 holds when $\tau\^\*$ is known, as opposed to $\tau$, another typo?**
**A(1-3)** Thank you for pointing that out, this is not a typo. The meaning of Lemma 5.2 is that for any given $\tau$, there is asymptotic convergence, and $\tau^*$ is not necessary.
**Q(2-1) Lemma 5.2 itself is vague.**
**A(2-1)**
In fact, Lemma 5.2 expresses the asymptotic convergence of $\hat{WAUC}$ to $WAUC$, whose proof can be decomposed into the convergence of the KDE (please refer to term (c) and term (d) in the appendix). Therefore, the convergence rate of Lemma 5.2 is the same as that of KDE. Specifically, the convergence result can be found in Theorem 7 of [1], with a convergence rate of $O(\sqrt{\frac{\log n_-}{n_- m} })$ (where $m$ represents the bandwidth, and in our problem the KDE dimension is $d=1$).
[1] Jiang H. Uniform convergence rates for kernel density estimation[C]//International Conference on Machine Learning. PMLR, 2017: 1694-1703.
**Q(2-2) What happens when $\tau^\*$ is not known**
**A(2-2)** When the threshold is not optimal, the outer objective function, $WAUC$, is optimized based on the current threshold $\tau$. However, since the threshold $\tau$ is continuously optimized and converges to $\tau^*$, as the number of iterations increases, the outer $WAUC$ also converges to the optimal value $WAUC\^\*$ along with $\tau$.
**Q(3) Theorem 5.3 is also vague.**
**A(3)** We apologize for our unclear expression. We have provided the penalty function condition in lines 543-544 of the proof of Theorem 5.3: $\kappa, M, M'\rightarrow \infty$ where $\kappa$ comes from $\psi(x)=\frac{\log(1+\exp(\kappa x))}{\kappa}$.
**Q(4) There is no proof of Theorem 6.3.**
**A(4)** According to the result of [5] (as quoted from [5], page 21), we have the following convergence rate for bilevel optimization.
In our paper, we change some symbols, which can be described by the following table:
| Symbol | Symbol in [5] | Symbol in ours |
|:---:|:---:|:---:|
| Lipschitz continuity of $f$ | $\ell_{f,0}$ | $L_{f,0}$ |
| Lipschitz continuity of $\nabla f$ | $\ell_{f,1}$ | $L_{f,1}$ |
| Lipschitz continuity of $\nabla g$ | $\ell_{g,1}$ | $L_{g,1}$ |
| Strong convexity of $g$ w.r.t. $\tau$ | $\mu_g$ | $\mu$ |
In our problem, $\rho_g:=\frac{2 \mu L_{g, 1}}{\mu+L_{g, 1}}$, then we have:
$$
\bar{\alpha}\_1=\frac{1}{2 L\_F+4 L\_f L\_y+2 L\_f L\_{y x} \left(L\_y \eta\right)}, \quad \bar{\alpha}\_2=\frac{16 T \mu L\_{g, 1}}{\left(\mu+L\_{g, 1}\right)^2\left(8 L\_f L\_y+2 \eta L\_{y x} \tilde{C}\_f^2 \bar{\alpha}\_1\right)}
$$
we select the following stepsizes as
$$
\alpha\_k=\min \left\\{\bar{\alpha}\_1, \bar{\alpha}\_2, \frac{1}{\sqrt{K}}\right\\} \quad \beta\_k=\frac{8 L\_f L\_y+2 \eta L\_{y x} \tilde{C}\_f^2 \bar{\alpha}\_1}{4 T \mu} \alpha\_k
$$
With the above choice of stepsizes, (53) in [5] can be simplified as
$$
\mathbb{E}\left[\mathbb{V}\^{k+1}\right]-\mathbb{E}\left[\mathbb{V}\^k\right] \leq-\frac{\alpha\_k}{2} \mathbb{E}\left[\left\|\nabla F\left(\boldsymbol{w}\_k\right)\right\|^2\right]+c\_1 \alpha_k^2 \sigma\_{g, 1}^2+\alpha\_k b\_k\^2+c\_2 \alpha\_k\^2 \tilde{\sigma}\_f\^2
$$
where the constants $c_1$ and $c_2$ are defined as
$$
\begin{aligned}
c\_1 & =\frac{L\_f}{L\_y}\left(1+2 L\_f L\_y \bar{\alpha}\_1+\frac{\eta L\_{y x} \tilde{C}\_f^2}{4} \bar{\alpha}\_1\^2\right)\left(\frac{8 L\_f L\_y+\eta L\_{y x} \tilde{C}\_f^2 \bar{\alpha}\_1}{4 \rho_g}\right)^2 \frac{1}{T} \\\\
c\_2 & =\left(\frac{L\_F}{2}+L\_f L\_y+\frac{L\_{y x} L\_f}{4 \eta L\_y}\right) .
\end{aligned}
$$
Then telescoping leads to
$$
\frac{1}{K} \sum_{k=0}\^{K-1} \mathbb{E}\left[\left\\|\nabla F\left(x\^k\right)\right\\|\^2\right] \leq \begin{matrix}
&\frac{2M_0}{K \min\{\bar{\alpha}\_1,\bar{\alpha}\_2\}}\quad +&\frac{2\mathbb{V}\_0}{\alpha \sqrt{K}}\quad+&2 b\_k\^2\quad+&\frac{2c\_1\alpha}{\sqrt{K}}\sigma\_{g,1}^2\quad+&\frac{2c\_2\alpha}{\sqrt{K}}\tilde{\sigma}\_f\^2\\\\
&\downarrow &\downarrow &\downarrow &\downarrow &\downarrow \\\\
&O\left(\frac{1}{K}\right) & O\left(\frac{1}{\sqrt{K}}\right) & O\left(\frac{1}{K}\right) & \text{term} (1) & O\left(\frac{1}{\sqrt{K}}\right)
\end{matrix}
$$
$$
\begin{aligned}
\text{term} (1) &\overset{(a)}{=} \underbrace{2\alpha \sigma\_{g, 1}^2 \frac{L\_f}{L\_y}\left(1+2 L\_f L\_y \bar{\alpha}\_1+\frac{\eta L\_{y x} \tilde{C}\_f\^2}{4} \bar{\alpha}\_1\^2\right)\left(8 L\_f L\_y+\eta L\_{y x} \tilde{C}\_f\^2 \bar{\alpha}\_1\right)\^2 \left(\frac{\mu+L\_{g,1}}{8\mu L\_{g,1}}\right)\^2}_{\gamma } \frac{1}{T\sqrt{K}}\\\\
&\overset{(b)}{=} \gamma \left(\frac{3 M \kappa e\^\kappa \/\left(e\^\kappa+1\right)\^2+L\_{g, 1}}{24 M \kappa e\^\kappa \/\left(e\^\kappa+1\right)\^2 L\_{g, 1}}\right)\^2\frac{1}{T\sqrt{K}}
\end{aligned}
$$
where $(a)$ comes from $\rho\_g=\frac{2 \mu L\_{g, 1}}{\mu+L\_{g, 1}}$ and $(b)$ comes from $\mu=\frac{3M\kappa e\^\kappa}{(e\^\kappa+1)\^2}$. Since $O(1/\sqrt{K})$ decays more slowly than $O(1/K)$, we adopt $O(1/\sqrt{K})$ as the final rate, yielding:
$$
\frac{1}{K} \sum\_{k=0}^{K-1} \mathbb{E}\left[\left\\|\nabla F\left(\boldsymbol{w}\_{k}\right)\right\\|\^{2}\right] \leq \gamma\left(\frac{3 M \kappa e\^{\kappa} \/ \left(e^{\kappa}+1\right)\^{2}+L\_{g, 1}}{24 M \kappa e\^{\kappa} \/ \left(e\^{\kappa}+1\right)\^{2} L\_{g, 1}}\right)\^{2} \frac{1}{T \sqrt{K}}+O\left(\frac{1}{\sqrt{K}}\right)
$$
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response to my review. As the discussion period is short I will ask any further questions as they come up.
For Lemma 5.2, the estimator $\widehat{WAUC}$, as defined in question (8), requires knowledge of $\mathcal{D}_\tau$ through $p(\tau)$ anyway, so I do not see the relevance of whether the empirical (boldface) $\mathbf{\tau}$ is known?
---
Reply to Comment 1.1.1:
Title: Author response to Reviewer uAig (Reclaimed)
Comment: Thank you for your detailed and constructive feedback on the paper. We agree with the reviewer that the theory itself does not require to know the prior of $\tau$. We only intend to emphasize that the calculation of $\widehat{WAUC}$ requires knowing $\tau$. We'll revise the proposition according to your suggestion.
Practically, to estimate the distribution of $\tau$, we have to sample the empirical costs as data points. For each sampled cost, we employ the inner problem of (OP0) to find the corresponding threshold. In this way, we can find an empirical estimation of $\tau$ to estimate the population distribution. | Summary: In this paper, a bi-level optimization method is proposed for binary classification with unknown cost distributions. The motivation is to propose an adaptive method to deal with different class and cost distributions, getting rid of the assumption of traditional AUC which assumes the uniform cost distribution. The key idea lies in treating the prediction threshold as an learnable parameter, and to utilize bi-level optimization to learn prediction threshold and model parameters jointly. The proposed method is test under several benchmark datasets for verifying its usefulness.
Strengths: - The paper is mostly well-written so that it is easy to grasp the key ideas and major results.
- The proposed method is accompanied by theoretical guarantees.
Weaknesses: - Only binary classification is studied. For binary classification, cost-sensitive learning and the AUC metric are both thoroughly studied. Therefore, the contribution is not quite significant.
- Bi-level optimization is a standard technique to optimize hyper-parameters like prediction threshold. Thus the technical contribution is somehow limited.
Further suggestions:
In fig. 2 and the experiments, the benchmark datasets such as CIFAR-10/100 are multi-class ones. While the paper makes use of binary class versions of the datasets. It is necessary to describe the classes chosen in the experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In most cost-sensitive binary classification tasks, it is sufficient to assume that the cost for one class of instances remains the same. In this paper, however, it seems to be assumed that the cost can vary. It would be nice to discuss in what kinds of real applications this assumption is necessary.
------
Acknowledgement:
I would like to thank the authors for their efforts on the responses and the improvements to the paper. Even though my concerns are not fully addressed, in particular regarding the real-world applicability, it would be very nice to include the improvements in future versions of the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I didn't find out potential negative societal impact of the paper. While I encourage to include more discussions on technical limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Author response to Reviewer mY4P
Thank you for your detailed and constructive feedback on the paper. We value your insights and have taken your suggestions into consideration. Here are our responses to your specific comments.
**Q(1) Only binary classification is studied. For binary classification, cost-sensitive learning and the AUC metric are both thoroughly studied**
**A(1)** While there have been many studies on binary classification problems, both AUC and cost-sensitive learning have their own limitations. The main goal of this paper is to address the shortcomings of existing binary classification methods.
- Cost-sensitive learning: The trained model is not robust to class distribution shift in the test.
- AUC learning: The trained model is not robust to cost distribution in the test.
- WAUC learning: The trained model is robust to both the cost distribution and class distribution shift at test time. **Furthermore, the problem we investigate, namely cost-sensitive robust learning, is grounded in a real-world context. Please refer to Q(4) for more details.**
**Q(2) Bi-level optimization is a standard technique to optimize hyper-parameters like prediction threshold. Thus the technical contribution is somehow limited**
**A(2)** Bilevel optimization is a classical approach to optimization. In our methodology, the utilization of bilevel optimization allows for an elegant solution to optimization problems. Our primary contribution lies in the proposal of the WAUC form, which addresses the drawbacks of existing AUC optimization and cost-sensitive learning techniques, along with providing a well-grounded algorithm for its optimization. Bilevel optimization can be viewed as an essential tool for problem-solving.
**Q(3) CIFAR-10/100 are multi-class; it is necessary to describe the classes chosen in the experiments**
**A(3)** We have provided the methodology for creating the datasets, with specific details outlined in Appendix B.2 (Dataset Details).
- Binary CIFAR-10-Long-Tail Dataset. The CIFAR-10 dataset contains 60,000 images of shape 32 * 32, grouped into 10 classes of 6,000 images each. The training and test sets contain 50,000 and 10,000 images, respectively. We construct the binary datasets by selecting one category as the positive class and the remaining categories as the negative class. We generate three binary subsets whose positive categories are 1) birds, 2) automobiles, and 3) cats.
- Binary CIFAR-100-Long-Tail Dataset. The original CIFAR-100 dataset has 100 classes, each containing 600 images; the 100 classes are grouped into 20 superclasses. By selecting a superclass as the positive class each time, we create CIFAR-100-LT following the same process as for CIFAR-10-LT. The positive superclasses are 1) fruits and vegetables, 2) insects, and 3) large omnivores and herbivores.
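The relabeling rule behind both constructions can be sketched as follows (an illustrative snippet with made-up labels, not the actual CIFAR loaders):

```python
def binarize_labels(labels, positive_classes):
    """Relabel a multi-class dataset: classes in `positive_classes`
    become the positive class (1), all remaining classes become the
    negative class (0)."""
    positive_classes = set(positive_classes)
    return [1 if y in positive_classes else 0 for y in labels]

# e.g., treating CIFAR-10 class index 2 ('bird') as the positive category:
labels = [0, 2, 3, 2, 9, 1]
binary = binarize_labels(labels, positive_classes={2})
# binary == [0, 1, 0, 1, 0, 0]
```

The same rule applies to CIFAR-100 by passing the set of class indices belonging to the chosen superclass.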
**Q(4) It would be nice to discuss in what kind of real applications this assumption is necessary**
**A(4)** In reality, some applications like financial markets prediction ([real-world scenario, click this link to open](https://www.kaggle.com/competitions/jane-street-market-prediction/overview)) involve investment issues. Every investment has a cost involved, including but is not limited to:
- When the company is well-funded and the market conditions are relatively good, missing investment opportunities are more costly (true action: trade, prediction: pass).
- When a company is short of funds and the market performance is poor, blind investment is more costly (true action: pass, prediction: trade).
The real cost situation must be more complex than the above and is far beyond a simple beta distribution. Developing trading strategies to identify and take advantage of inefficiencies is challenging. Even if a strategy is profitable now, it may not be in the future, and market volatility makes it hard to predict the profitability of any given trade with certainty. Hence, we propose our method to build the WAUC estimator over a non-parametric cost distribution to fit this type of application.
---
Rebuttal Comment 1.1:
Title: more discussions on real-world applications
Comment: Thanks for the responses.
I indeed understand that cost-sensitive situations appear in many real applications. My feeling, however, is that even though many approaches and performance measures, especially classical ones like AUC, may indeed have drawbacks in theory, those drawbacks do not really affect their effectiveness in most real-world applications. Thus I would like to see solid real-world applications for which new performance measures or approaches are essential.
---
Reply to Comment 1.1.1:
Title: Author response to Reviewer mY4P
Comment: In reality, some applications such as financial market prediction ([real-world scenario, click this link to open](https://www.kaggle.com/competitions/jane-street-market-prediction/overview); **we conduct experiments on this dataset in the original paper**) involve investment decisions. For each transaction, there are two actions: TRADE or PASS. Assume the same, small amount per transaction, so that there are no large losses and money is earned through high-frequency trading. In this practical application, we need to use the available information to decide which action to take next. It is worth noting that the costs of choosing different actions are not consistent.
| n-th transaction | TRADE (truth) | PASS (truth) |
| :--: | :--: | :--: |
| TRADE (prediction) | 0 | cost: $c_{+}$ |
| PASS (prediction) | cost: $c_{-}$ | 0 |
- When the company is well-funded and market conditions are relatively good, missing transaction opportunities is more costly (true action: trade, prediction: pass).
- When a company is short of funds and market performance is poor, blind trading is more costly (true action: pass, prediction: trade).
The real cost structure is surely more complex than the above. Developing trading strategies that identify and exploit inefficiencies is challenging. Even if a strategy is profitable now, it may not be in the future, and market volatility makes it hard to predict the profitability of any given trade with certainty.
Moreover, we not only need to minimize cost; we also need to ensure that the final profit is maximized. For traditional cost-sensitive learning, imbalanced class distributions as well as outliers in the data can affect decision making and thus lead to lower profit. One way to address these issues is to optimize the model with a combination of the cost function and a performance metric such as WAUC. Hence, we propose our WAUC estimator over a complex cost distribution to fit this type of application.
---
Reply to Comment 1.1.2:
Title: Author response to Reviewer mY4P
Comment: We thank the reviewer for taking the time to go through the rebuttal. We have given a response to your question.
We have sincerely noted all the points related to the presentation and will surely improve them in the final version of the draft. We would be grateful if the reviewer could provide a list of remaining concerns, which we will address in the remaining time.
We further request the reviewer to update the main review to reflect the new score. Please feel free to contact us for any further questions. | Rebuttal 1:
Rebuttal: ### Dear ACs and Reviewers, Thank you so much for your valuable comments! They have really helped us improve our manuscript!
In order to facilitate reviewers' comprehension of our paper, we want to summarize our contributions again:
- **We propose a setting that focuses on the robustness of the model to the class distribution and cost distribution simultaneously.** This setting treats cost as data that can be sampled, not as prior information, which is closer to the real-world cost-sensitive scenario.
- **We present a bilevel paradigm where the cost function serves as an inner constraint of the outer WAUC optimization.** For the sake of optimization, we reformulate this paradigm into a nonconvex-strongly-convex bilevel form. Moreover, we employ a stochastic optimization algorithm for WAUC (SACCL), which can solve this problem efficiently.
- We conduct extensive experiments on multiple imbalanced cost-sensitive classification tasks. The experimental results speak to the effectiveness of our proposed methods. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a method that combines WAUC (weighted Area under ROC curve) learning with cost-sensistive learning. They propose a bilevel optimization algorithm to solve the formulated problem and provide theoretical analysis for convergence. According to their experiments on three datasets, the practical performance is good. The idea is interesting and the method should be novel.
Strengths: 1) The presentation is good. The authors raise their motivation (combines AUC with cost learning) at the beginning. And the writing flow is clear for the paper. They have several good figures to illustrate the motivations.
2) Overall, the soundness is good. Apart from basic writings, authors also provide both practical comparison with some baselines and theoretical analysis for their algorithm convergence.
Weaknesses: 1) The method is designed to be both class distribution robust and cost distribution robust, but doesn't demonstrate better AUC performance in Table 2. Comparing with $\widehat{WAUC}$, the $\widehat{AUC}$ is the more suitable metric for class distribution robustness (besides, the $\widehat{WAUC}$ is defined and only optimized by the proposed method).
2) Miss an ad-hoc baseline that optimizes both AUC and cost sensitive learning objectives simultaneously by simply assigning different weights.
3) The experiments should be repeated multiple times independently and report the mean&standard deviation values.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Besides of the weaknesses, it would be better if authors could provide running time report for the proposed method and other baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Author response to Reviewer uhz4
Thank you for your detailed and constructive feedback on the paper. We value your insights and have taken your suggestions into consideration. Here are our responses to your specific comments.
**Q(1-1) WAUC method doesn't demonstrate better AUC performance in Table 2.**
**A(1-1)** We conducted three sets of experiments targeting different cost distributions (uniform, normal, beta). When the cost distribution follows a normal or beta distribution (see Tab.2, Tab.6), our method exhibits a lower AUC value. This is because AUC does not align with cost-sensitive scenarios. **However, when the cost distribution is uniform (see Tab.5 in the appendix), our method achieves state-of-the-art performance.** WAUC is equivalent to AUC when the cost ratio follows a uniform distribution $U(0,1)$.
**Q(1-2) Comparing with $\hat{WAUC}$, the $\hat{AUC}$ is the more suitable metric for class distribution robustness.**
**A(1-2)** In fact, AUC can be considered a special case of WAUC (we point this out in the introduction, line 30). WAUC is equivalent to AUC when the cost ratio follows a uniform distribution $U(0,1)$. **We have conducted experiments in Tab. 5 which demonstrate that WAUC achieves state-of-the-art performance.**
Therefore, by definition, $\hat{WAUC}$ emerges as the more appropriate metric for class distribution robustness. Moreover, we conducted relevant experiments on Cifar-10-Subset-1 to compare the differences between AUC and WAUC by altering the level of class imbalance. Assume that we have two models (for both, we solve for the optimal posterior threshold via $\hat{\mathcal{L}}_{COST}$ after training):
- [a] model trained on AUC method, $c\sim U(0,1)$;
- [b] model trained on WAUC method, $c\sim U(0,1)$;
Given different class priors $\pi = n_+/(n_++n_-)$ of the data, the results are shown in the following table.
| method | $\pi=0.1$ | $\pi=0.2$ | $\pi=0.3$ | $\pi=0.4$ | $\pi=0.5$ | $\pi=0.6$ | $\pi=0.7$ | $\pi=0.8$ | $\pi=0.9$ |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [a] | 0.285 | 0.271 | 0.253 | 0.217 | 0.201 | 0.192 | 0.175 | 0.186 | 0.199 |
| [b] | 0.280 | 0.268 | 0.241 | 0.213 | 0.200 | 0.193 | 0.171 | 0.182 | 0.190 |
It is clear that there is no discernible disparity in class robustness between AUC and WAUC.
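The claimed equivalence can be checked numerically with a simplified stand-in for the WAUC estimator: treating WAUC as the integral of TPR over FPR under a cost-induced weight density (the paper's exact estimator may differ), a uniform cost weight recovers AUC exactly, while a skewed Beta(2,5)-style weight does not:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 20_000)  # scores of positive samples
neg = rng.normal(0.0, 1.0, 20_000)  # scores of negative samples

t = np.linspace(-5.0, 6.0, 2001)    # decision thresholds
tpr = np.array([(pos >= v).mean() for v in t])[::-1]  # reversed: increasing FPR
fpr = np.array([(neg >= v).mean() for v in t])[::-1]

# AUC is the integral of TPR over FPR.
auc = trap(tpr, fpr)

# Simplified WAUC: integrate TPR against a weight density w(FPR).
# Uniform cost -> w == 1 everywhere, so WAUC coincides with AUC.
wauc_uniform = trap(tpr * np.ones_like(fpr), fpr)

# A skewed Beta(2,5)-style weight emphasizes the low-FPR region
# and yields a genuinely different value.
w = fpr * (1.0 - fpr) ** 4
w /= trap(w, fpr)                   # normalize the weight to integrate to 1
wauc_beta = trap(tpr * w, fpr)

print(auc, wauc_uniform, wauc_beta)
```

Under this simplified definition, `wauc_uniform` matches `auc` exactly, illustrating why the uniform-cost setting of Tab. 5 is the one where AUC and WAUC agree.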
**Q(2) Miss an ad-hoc baseline that optimizes both AUC and cost sensitive learning objectives simultaneously by simply assigning different weights.**
**A(2)** Thank you very much for your suggestion. We conducted additional experiments incorporating AUC and cost-sensitive learning weighted baselines. The specific results are listed in the table below ($c\sim \mathcal{N}(0.5, 1)$):
| method | Cifar-10-Subset-1 / $\hat{AUC}$ | Cifar-10-Subset-1 / $\hat{WAUC}$ | Cifar-10-Subset-1 / $\hat{\mathcal{L}}_{COST}$ | Cifar-100-Subset-1 / $\hat{AUC}$ | Cifar-100-Subset-1 / $\hat{WAUC}$ | Cifar-100-Subset-1 / $\hat{\mathcal{L}}_{COST}$ |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|0.5*ExAUC+0.5*ECL|0.815|0.524|0.026|**0.917**|0.543|0.016|
|0.5*SqAUC+0.5*ECL|0.795|0.477|0.027|0.876|0.497|0.021|
|0.8*ExAUC+0.2*ECL|**0.839**|0.519|0.025|0.921|0.465|0.018|
|0.8*SqAUC+0.2*ECL|0.804|0.487|0.025|0.880|0.503|0.025|
|0.2*ExAUC+0.8*ECL|0.810|0.506|0.027|0.903|0.509|0.019|
|0.2*SqAUC+0.8*ECL|0.768|0.518|0.025|0.854|0.503|0.019|
|WAUC-Gau|0.787|**0.679**|0.024|0.842|**0.745**|0.015|
|WAUC-Log|0.820|0.653|**0.023**|0.906|0.719|**0.012**|
The results demonstrate that our algorithm has achieved state-of-the-art performance in terms of cost.
**Q(3) The experiments should be repeated multiple times independently and report the mean&standard deviation values.**
**A(3)** In our experimental setup, we indeed ran each method multiple times and took the average. However, due to space limitations in the paper, we omitted the standard deviations from the experimental tables. We have included standard deviations in Table 3; please click [clickable url](https://anonymous.4open.science/r/WAUC-9B9B/error_bar.png) to open. We plan to include the detailed information in future revisions of the paper.
**Q(4) It would be better if authors could provide running time report for the proposed method and other baselines**
**A(4)** Thank you for your suggestion!
Firstly, we analyze the time complexity (one iteration) of our methods and the baselines.
- WAUC-Gau (WAUC method): $O(n_\tau n_+ + n_\tau n_-)$
- ExAUC (AUC method): $O(n_+n_-)$
- ECL (cost-sensitive learning method): $O(n_\tau n_+ + n_\tau n_-)$
We conduct experiments on time complexity with a fixed number of epochs and varying $n_+$ and $n_-$. All experiments are conducted on an Ubuntu 16.04.1 server with an Intel(R) Xeon(R) Silver 4110 CPU (to remove the effect of parallel computing). For every method, we repeat the run 10,000 times and record the average running time. We only record the loss-calculation time and use the Python function time.time() to measure it.
|method/unit:s|$n_+,n_-=128$|$n_+,n_-=256$|$n_+,n_-=512$|$n_+,n_-=1024$|$n_+,n_-=2048$|
|:---:|:---:|:---:|:---:|:---:|:---:|
|BCE|1.352|1.856|3.285|6.195|11.957|
|ExAUC|1.481|1.952|3.951|7.592|12.903|
|SqAUC|1.380|1.988|4.041|7.813|12.203|
|NWAUC|1.648|2.268|4.241|8.853|16.947|
|PAUC-exp|1.380|1.968|3.741|7.748|13.374|
|PAUC-poly|1.402|2.075|4.013|7.983|14.183|
|PAUCI|2.085|3.597|6.592|10.967|22.571|
|CS-hinge|1.880|4.193|7.893|11.213|23.846|
|AdaCOS|2.197|3.896|6.871|13.414|20.487|
|ECL|1.974|3.268|5.862|10.831|17.127|
|WAUC-Gau|1.980|2.975|4.587|8.681|16.976|
|WAUC-Log|1.897|2.790|4.924|8.487|16.891|
**The results indicate that there is no significant difference in the running time of the WAUC method compared to other binary classification methods.**
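A minimal sketch of this measurement protocol, with a toy stand-in for the loss functions (`time.perf_counter` is used here for better resolution than the `time.time()` reported above):

```python
import time
import numpy as np

def average_runtime(loss_fn, repeats=10_000):
    """Average wall-clock seconds per call, following the protocol above:
    time only the loss computation, averaged over many repeats."""
    start = time.perf_counter()
    for _ in range(repeats):
        loss_fn()
    return (time.perf_counter() - start) / repeats

# Stand-in loss: a pairwise AUC-style exponential surrogate on random scores.
pos = np.random.rand(128)
neg = np.random.rand(128)
toy_loss = lambda: np.mean(np.exp(-(pos[:, None] - neg[None, :])))

print(f"{average_runtime(toy_loss, repeats=100):.2e} s/call")
```

Swapping `toy_loss` for each method's actual loss reproduces one row of the timing table.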
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: The reviewer has read the rebuttal and appreciates the efforts made by the authors. Most of the concerns are resolved by the clarifications (except that the multiple experiment repeats with standard deviation are reported for only one experimental setting in the anonymous link). The reviewer is willing to increase the evaluation from 5 to 6, given that the authors plan to include more details in this revision. | null | null | null | null | null | null |
Distributionally Robust Ensemble of Lottery Tickets Towards Calibrated Sparse Network Training | Accept (poster) | Summary: - The authors proposed a novel Distributionally Robust Optimization (DRO) framework to achieve an ensemble of lottery tickets toward calibrated network sparsification.
- The proposed DRO ensemble aimed to learn multiple diverse and complementary sparse sub-networks with the guidance of uncertainty sets, which encourage winning tickets to capture different data distributions from easy to hard gradually.
- The authors theoretically justified the strong calibration performance by showing how the proposed DRO guarantees to lower the confidence of incorrect predictions.
- Extensive experimental results on several benchmarks demonstrated that the proposed DRO leads to a clear calibration improvement without sacrificing accuracy and burdening inference costs.
Strengths: (+) The authors proposed a novel sparse ensemble framework (DRO) that combines competitive sparse sub-networks to achieve better calibration performance with the scheduling of the learning of complementary ensemble sub-networks (tickets).
(+) The proposed robust training process guaranteed to lower the confidence of incorrect predictions and strong calibration performances.
(+) Extensive empirical results demonstrated the proposed lottery ticket ensemble's effectiveness in competitive classification and open-set detection.
Weaknesses: (-)The proposed DRO conducted experiments using (M==3) subnetworks and showed each sub-networks performance and confidence scores (Appendix). However, there is no ablation study on these subnetworks regarding V or V’ feature representations. These representations could help understand the subnetwork’s ensemble better.
(-) Line 258: (Regarding feature representations), is there any evidence or observation that the ERM model learns to memorize some noisy feature v’ introduced through specific spurious correlations?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: (-) line 155 (typos): to follow a multivariate Gaussian distribution with the red dot representing its mean? → gray dot?
(-) line 164: I could not understand the following sentence: “Starting from the second sub-network, the training distribution is changed according to the losses.”
(-) The subnetwork’s feature-level analysis could help understand this work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please, see above the weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable comments/suggestions. We summarize our responses as follows.
**Q1: Ablation study regarding V or V’ feature representations.**
Thank you for this great suggestion! Following the reviewer's idea, we conduct additional experiments on the Waterbird dataset [1], which contains explicit spurious correlations. Specifically, this dataset has two classes: (a) waterbird and (b) landbird. Most waterbird images are taken against a water background, whereas landbird images are taken against a land background. Hence, the model will tend to associate the background with the type of bird instead of focusing on the true underlying features of the birds (e.g., feather color).
Table 3 (in the **attached pdf of the general rebuttal**) summarizes the data distribution. There are limited data samples without the spurious correlation in the training set, and therefore the model is likely to predict based on the background instead of using the true features. Compared to the training set, the validation and testing sets are less skewed, so evaluation on the testing set no longer favors focusing only on the spurious correlation.
Table 4 (b) (in the **attached pdf of the general rebuttal**) shows the performance of the sparse network ensembles SNE and DRE at 15\% total sparsity. In the table, Original denotes the original testing set described in Table 3 of the attached pdf file; spurious only considers just the samples with spurious correlations (i.e., a waterbird on a water background or a landbird on a land background); and non-spurious only considers the samples without spurious correlations (i.e., a waterbird on a land background or a landbird on a water background). There are two key observations. First, SNE performs similarly to DRE on samples holding spurious correlations (i.e., spurious only), where overconfident predictions are usually favored since they are most likely to be correct thanks to the spurious correlation. Second, on non-spurious only, DRE achieves better performance in terms of both accuracy and ECE, which confirms that our model indeed learns from important features instead of spurious correlations. On the original test set, because of the large number of samples holding spurious correlations, we do not see a clear advantage in accuracy. However, DRE still achieves clearly better calibration than SNE.
**Q2: Evidence or observation that the ERM model learns to memorize some noisy feature v’ introduced through specific spurious correlations**
Thank you for this insightful comment. In fact, we have provided evidence that the ERM model learns to memorize noisy features introduced through spurious correlations in Figure 6 of Appendix D.11. We show the number of incorrectly classified samples with respect to the confidence score using different techniques. Plots a-d use the ERM model, where a majority of samples are concentrated in the high-confidence region despite being incorrect. This is because the model learns to pick up the noisy features (i.e., spurious correlations, which cause overfitting) in addition to the important learning signals during training. As a result, during testing, whenever such noisy features occur, the model produces overconfident predictions while ignoring the true signals. In contrast, using DRE (plot h) and DRO (plots f and g), the model becomes less confident in those wrong cases, and therefore a majority of the wrong samples are concentrated in the low-confidence region. This is because the DRO technique forces the model to learn from the important signals instead of spurious correlations, so it does not produce as many overconfident predictions as ERM.
**Q3: Clarity on sentence in line 164**
In the case of AdaBoost, we start by training the first sub-network. Then, based on its performance, we assign an importance weight to each data sample (based on its loss) and train the second sub-network. A higher loss under the first sub-network results in a larger weight for that sample when training the second sub-network, where the weight is the probability of the data point being sampled during training. In other words, difficult samples appear more frequently for the second sub-network than for the first. To train the third sub-network, we again assign higher weights to samples with higher losses under the second sub-network and train on the new data distribution induced by the new weights. We repeat this process. We will make this clear in the revised paper.
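The reweighting step described above can be sketched as follows; the loss-to-probability normalization here is purely illustrative (AdaBoost's actual update is exponential in the error), but it captures the idea that harder samples appear more often in the next sub-network's training distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
losses = rng.exponential(1.0, size=n)  # per-sample losses from sub-network k

# Turn losses into sampling probabilities: higher loss -> sampled more often
# when drawing the training distribution for sub-network k+1.
probs = losses / losses.sum()
resampled = rng.choice(n, size=n, replace=True, p=probs)

# Difficult samples now dominate relative to the uniform distribution.
print(losses[resampled].mean() > losses.mean())  # True
```

Repeating this loss-then-resample loop for each subsequent sub-network yields the easy-to-hard progression described in the rebuttal.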
**Q4: Typo**
Thanks for carefully checking our paper and identifying the typo. We will fix the issue and improve the presentation of the revised paper.
**Q5: Subnetwork's feature-level analysis**
This is a great suggestion! We have visualized the features that a (non-DRO) subnetwork chooses to focus on and compared them with the features focused on by a subnetwork trained through the DRO framework. Please refer to Figure 1 in **the pdf file attached with the general rebuttal**.
**References**
- [1] Sagawa et al. "Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization". ICLR2020.
---
Rebuttal Comment 1.1:
Comment: Thank the author for the detailed rebuttals.
The authors clarified the subnetwork's feature-level analysis through additional ablation studies. However, I wanted to observe subnetwork-wise representations rather than a single subnetwork's representation, because that observation would support the work's motivation and the effectiveness of the ensemble of subnetworks.
So, I will decrease my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for going through our rebuttal. We would be grateful if the reviewer can further clarify the expected result for the 'subnetwork-wise representation'.
In Figure 1 of attached pdf file we have visualized the the heatmap of convolution 4 layer using the Grad-Cam technique. As shown in the figure, the sparse subntwork in the SNE focuses on the water background instead of focusing on the actual landbird object. This is because, during training process, the sparse subnetwork in SNE is likely to learn to associate the spurious background feature with the true label. Specifically, the model learns to predict landbird whenever there is a land background and waterbird whenever there is a water background. In contrast, using the DRE technique, as demonstrated through the heatmap, the sparse subnetwork focuses on the actual object instead of the background. It is worth mentioning that, because of the overfitting phenomenon and lack of a systematic way for diversification, each sparse subnetwork in SNE behaves in a similar way by focusing on the spurious feature instead of the actual object. In contrast, in the case of DRE, each sparse subnetwork is controlled by the $\eta$ parameter in Eq. 2 (main paper). Specifically, we use a low $\eta$ value (i.e., $\eta\rightarrow 0$) for one of the sparse subnetworks, which will be similar to that of the sparse subnetwork in the SNE. However, for the higher $\eta$ value, it will focus on learning from more difficult samples, including those not holding the spurious correlation. As such, the sparse subnetwork is forced to learn from the actual object instead of through the background. Therefore, the model focuses mostly on actual objects as demonstrated in Figure 1 (b) of the attached pdf file. When we combine these diverse sparse subnetworks in DRE, we will have a better calibration without being confidently wrong like in SNE. In the revised paper, we will add all subnetworks' heatmaps.
We hope this can address the reviewer's question about 'subnetwork-wise representation' and we are happy to provide any additional details if needed. | Summary: In this paper, the author proposes a Distributionally Robust Optimization (DRO) framework, which utilizes the ensemble of multiple sparse sub-networks to improve the network calibration. The author argues that the previous ensemble method, i.e., AdaBoost, will make the sub-network severely underfit the training data, leading to a rather poor generalization capability. To solve this problem, the author proposes Distributionally robust ensemble (DRE) method to obtain complementary sparse sub-networks.
Strengths: * The proposed method is evaluated on different datasets and networks.
* The proposed method achieves better accuracy than the baseline works compared.
Weaknesses: * The author argues that the current sparse training works (LTH and EP) have two limitations, which are: (1) the requirement of pretraining a dense network and (2) the learning objective remains as improving the accuracy up to the original dense networks.
However, the author seems completely ignore the very popular (static/dynamic) sparse training methods, which directly train a sparse network from scratch and does not require extra training epochs for iterative pruning and growing, e.g., [1] [2] [3] [4] [5]. It seems that the author argued limitations are not in those sparse training methods.
[1] SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY
[2] Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
[3] PICKING WINNING TICKETS BEFORE TRAINING BY PRESERVING GRADIENT FLOW
[4] Rigging the Lottery: Making All Tickets Winners
[5] MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
* It is not clear what the content of Fig.1 means. There are no legends for those strips.
It would make the paper easier to read if the author can explain what the “network calibration” means at the beginning of the introduction section.
* I think this is yet another paper that abuses the term "Lottery Tickets". I really cannot find the connection between the proposed method and the Lottery Ticket Hypothesis. This makes the paper confusing to readers.
* I don’t think it is fair to compare the proposed method with the (original) LTH method.
* The author argues that the methods using pretraining – pruning are costly. But the author does not provide comparison results about the training costs (FLOPs) of the proposed method. It is not clear the 200 training epochs is for each sub-network or for the entire training process.
* It is inappropriate to call the inference FLOPs the inference speed.
* It seems that the prior work [6] is highly related to the proposed method. But the author does not provide any discussion about it. (Only compared with it in the results part.)
[6] Calibrate and prune: Improving reliability of lottery tickets through prediction calibration.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the weakness part.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable comments/suggestions. We summarize our responses as follows.
**Q1: Authors completely ignore very popular (static/dynamic) sparse training methods.**
Please refer to the answer to Q3 of the general response. To more clearly demonstrate this critical limitation of existing sparse training methods, we conducted additional experiments evaluating the calibration performance of all the methods suggested by the reviewer. The results are summarized in Table 2 (refer to **the attached pdf in the general rebuttal**). It is clear that all these methods achieve an ECE score similar to another representative sparse training method, EP, which is much worse than that of the proposed DRE framework. Therefore, our work makes a novel contribution to calibrated sparse network training that is orthogonal and complementary to existing sparse training methods.
**Q2: Clarity of Figure 1.**
In Figure 1, we use the standard Expected Calibration Error (ECE) plot, which is commonly used to visualize the calibration behavior of a model (see Figures 1 \& 4 in [7] as an example). In the ECE plot, the dashed diagonal line indicates a perfectly calibrated model, whose accuracy exactly matches its confidence. The striped shaded area indicates that the model's confidence is higher than its accuracy, implying overconfidence (a symptom of overfitting). The sky-blue bars show the model's accuracy at the given confidence value. If the accuracy exceeds the diagonal line, the model's accuracy is higher than its confidence, which implies that the model tends to make underconfident predictions.
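For reference, the ECE quantity behind such plots is conventionally computed by binning predictions by confidence and averaging the per-bin |accuracy − confidence| gap, weighted by bin population (a standard formulation sketched here, not the paper's exact implementation):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin population."""
    conf = np.asarray(conf)
    correct = np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Synthetic check: a calibrated model vs. an overconfident one.
rng = np.random.default_rng(0)
p = rng.uniform(0.5, 1.0, 50_000)
well_calibrated = expected_calibration_error(p, rng.random(50_000) < p)
overconfident = expected_calibration_error(p, rng.random(50_000) < p - 0.2)
print(round(well_calibrated, 3), round(overconfident, 3))
```

The overconfident model's ECE is roughly the 0.2 gap between its stated confidence and its true accuracy, which is exactly the striped region of the plot described above.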
**Q3: Connection between the proposed method and the Lottery Tickets hypothesis.**
We would like to cite several relevant works from the existing literature to clearly show the connection between the proposed method and LTH. According to [8] and [9], the strong Lottery Ticket Hypothesis states that "there exists a subnetwork in a randomly initialized neural network such that it already achieves almost the same accuracy as a fully trained network, without any optimization of the weights of the network". The proposed method builds upon Edge-Popup, which directly finds a sparse subnetwork from a randomly initialized dense network without pre-training or iterative pruning. It therefore matches the definition of the strong Lottery Ticket Hypothesis.
**Q4: It is not fair to compare the proposed method with LTH method.**
As a representative sparse network training method, we empirically show that LTH also suffers from poor calibration performance through our comparison. Furthermore, compared with LTH, our technique does not involve pre-training followed by iterative pruning and finetuning, making it computationally more efficient. Since we keep all other factors (e.g., data split) the same, we would like to kindly clarify that this is a fair comparison.
**Q5: Training cost of the proposed method.**
We would like to make it clear that each sparse subnetwork is trained for 200 epochs. Note that there is no dependency among the sparse subnetworks (their focus on the data space is automatically adjusted by the DRO framework), so they can be trained independently in parallel, making overall training more efficient. In contrast, LTH variants (especially methods using pretraining-pruning) typically involve a sequential process and therefore cannot be run in parallel, making them time-consuming.
**Q6: Inappropriate to call the inference FLOPs the inference speed.**
Thank you for the suggestion. We will fix the issue and change to inference FLOPs in the revised paper.
**Q7: Discussion about prior work in [6].**
Thank you for the suggestion and we will add a discussion of [6]. We would also like to clarify that [6] directly applies several commonly known calibration techniques on the top of the standard LTH model. Since it does not introduce any new technique, we only briefly discuss it when describing the performance comparison (lines 311-312).
**References**
- [1] SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY
- [2] Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
- [3] PICKING WINNING TICKETS BEFORE TRAINING BY PRESERVING GRADIENT FLOW
- [4] Rigging the Lottery: Making All Tickets Winners
- [5] MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
- [6] Calibrate and prune: Improving reliability of lottery tickets through prediction calibration.
- [7] Guo et al. "On Calibration of Modern Neural Networks". ICML2017.
- [8] Ramanujan et al. "What’s Hidden in a Randomly Weighted Neural Network?". CVPR2020.
- [9] Chijiwa et al. "Pruning Randomly Initialized Neural Networks with Iterative Randomization". NeurIPS2021.
---
Rebuttal 2:
Comment: Dear Reviewer RrYi,
Thank you for providing your reviews and valuable suggestions!
By addressing the raised concerns, we believe that the paper has been strengthened significantly, and we thank the reviewer for that. The additional sparse training methods suggested by the reviewer proved very useful for further showcasing the effectiveness of our proposed technique, and the performance gap between our technique and those existing sparse training methods, shown in the additional experimental results in our rebuttal, further strengthens our empirical evaluation. Suggestions such as reporting inference FLOPs instead of inference speed in Table 10 will make the paper more informative, and the suggestion regarding Figure 1 will make it easier to understand.
We hope that the reviewer finds our answers satisfactory and considers updating the assessment accordingly! We will be more than happy to provide any additional clarifications if needed. | Summary: The paper utilizes Distributionally Robust Optimization (DRO) framework to achieve an ensemble of lottery subnetworks for better calibration performance. Recently developed sparse network training methods, such as Lottery Ticket Hypothesis (LTH) and its variants, largely focus on sparsifying deep networks and realizing comparable accuracy to dense counterparts but neglect network calibration. The proposed DRO ensemble aims to learn multiple diverse and complementary sparse sub-networks with the guidance of uncertainty sets, which encourage tickets to gradually capture different data distributions from easy to hard and naturally complement each other. The authors theoretically justify the strong calibration performance of their proposed robust training process and show extensive experimental results on several benchmarks, demonstrating clear calibration improvement without sacrificing accuracy or burdening inference costs. Experiments on out-of-distribution (OOD) datasets also demonstrate the robustness of their approach in the open-set environment.
Strengths: The idea of ensembling multiple subnetworks for better calibration performance is novel, and the procedure does not require first training a dense network. The authors also give a theoretical analysis of the method and conduct extensive experiments to validate it.
Weaknesses: For the out-of-distribution comparison, the authors miss some important comparison methods, such as [1]. All the experiments are conducted on ResNet architectures; could the authors provide results for some other architectures? And could the proposed method be applied to filter-level subnetworks?
[1] Zhang, Dinghuai, et al. "Can subnetwork structure be the key to out-of-distribution generalization?." International Conference on Machine Learning. PMLR, 2021.
[2] Zhou, Xiao, et al. "Sparse invariant risk minimization." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable comments/suggestions. We summarize our responses as follows.
**Q1: Comparison with OOD Works [1, 2].**
Thank you for pointing out these important references, which we would like to include and discuss in the related works section of the revised paper. Here, we would like to highlight some key differences from our work.
- First, the training procedure of [1] also follows the dense pretraining paradigm, where dense network pretraining is performed, followed by pruning and finetuning of the pruned network. This process is similar to the LTH work, which we have already included as a comparison baseline. As can be seen from Table 2, dense pretraining techniques tend to perform poorly on OOD data, leading to a worse (higher) ECE score. The underlying reason for the poor calibration on OOD data is associated with the dense network pretraining and is also explained by [2]: through dense network training, the model may have already picked up spurious and noisy features while paying less attention to the important ones. As such, during the pruning phase, the subnetwork is searched with preference given to the spurious and noisy features, leading to suboptimal calibration performance. In contrast, our proposed DRE does not rely on a pretrained dense network, and the chance of finding sparse subnetworks that capture important features is enhanced thanks to the DRO-based training process.
- In the case of [2], the authors propose Sparse Invariant Risk Minimization (SIRM), which applies sparsification techniques on top of IRM to extract the invariant features in the desired sparse subnetwork. The paper demonstrates that IRM alone is not enough to avoid spurious features, and sparsity is therefore required during training to learn from the invariant features, giving the resulting subnetwork enhanced generalization capability. Notably, the framework in [2] cannot be compared directly with our DRE technique, as IRM training requires datasets from multiple environments. Meanwhile, SIRM is a generic technique that can be integrated with any desired sparse training strategy; it is therefore complementary and can be used together with our DRE framework.
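Since much of this exchange turns on ECE comparisons, a minimal sketch of how expected calibration error is typically computed may help ground the discussion. This is the generic binned ECE in the style of Guo et al. [7], not the authors' actual evaluation code; the function name and the 10-bin choice are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Binned ECE: bin-weighted average of |accuracy - mean confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # weight = fraction of samples in this bin
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece
```

Lower is better: for a well-calibrated model, per-bin confidence matches per-bin accuracy and each term vanishes, which is why "poor calibration" corresponds to a higher ECE score.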
**Q2: All the experiments are conducted on ResNet architectures, could the authors provide results for some other architectures?**
Thank you for raising this great point! In fact, we have conducted an ablation study that investigates the impact of different architectures, and the results are summarized in Table 9 of Appendix D.8. Using a vision transformer (i.e., ViT), DRE still achieves a much lower calibration error than EP. However, with ViT as the backbone, the accuracy of both EP and DRE is lower, and the ECE is higher, than with the other backbones. Existing works have shown that without pretraining, the lack of useful inductive biases in ViT can cause a performance drop. Since no pretraining is conducted for either EP or DRE, this leads to lower accuracy (and higher ECE).
**Q3: Proposed method on filter-level subnetworks.**
Thank you for this thoughtful comment! In our experiments, the backbones contain convolution layers, and we explicitly enforce sparsity in each convolution module. If we understand the question correctly, "filter-level subnetworks" refers to the sparse subnetworks, in which case we have already considered filter-level sparsity in our framework. One of the closest works that analyzes implicit sparsity at the filter level is [3]. In that paper, the authors empirically study the sparsification of convolutional neural networks mainly with respect to (a) the regularizer (weight decay and $L_2$), (b) the optimizer (SGD, Adam, Adadelta, and Adagrad), and (c) the difficulty of the task. They found that adaptive optimizers (e.g., Adam, Adadelta, and Adagrad) learn sparser network representations than SGD, that no sparsity is observed in the absence of regularizers such as $L_2$ and weight decay, and that the sparsity depends on the interplay between the regularizer and the optimizer. For example, $L_2$ yields higher sparsity at comparable performance than weight decay when using the Adam optimizer, whereas with SGD the difference between weight decay and $L_2$ is not significant. Furthermore, they showed that it is difficult to obtain a sparse network on difficult tasks, whereas higher sparsity is more easily obtained on easier tasks (e.g., Cifar10 yields higher sparsity than Cifar100). However, this analysis does not align with our goal of improving the calibration of sparse networks, as [3] primarily focuses on analyzing the implicit sparsity achieved under different factors.
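To make the filter-level notion concrete, here is a hedged sketch of how the implicit filter-level sparsity discussed in [3] might be measured for a convolution weight tensor. The function name and the zero tolerance are illustrative assumptions, not code from [3] or from the paper under review.

```python
import numpy as np

def filter_sparsity(conv_weight, tol=1e-3):
    """Fraction of output filters whose weights are all below `tol` in magnitude.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    A filter counts as 'pruned' when its largest-magnitude weight is below tol.
    """
    flat = np.abs(np.asarray(conv_weight, dtype=float)).reshape(conv_weight.shape[0], -1)
    return float((flat.max(axis=1) < tol).mean())
```

Under this measure, the regularizer/optimizer interplay from [3] would show up as a higher fraction of all-near-zero filters, without any explicit pruning step.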
We have conducted an experiment considering different optimizers and weight decay coefficients on the Cifar10 dataset with the ResNet50 architecture. As shown in Table 4 (a) (in **the attached pdf file of the general rebuttal**), simply using different optimizers with varied weight decay coefficients does not help to improve calibration.
**References**
- [1] Zhang, Dinghuai, et al. "Can subnetwork structure be the key to out-of-distribution generalization?." International Conference on Machine Learning. PMLR, 2021.
- [2] Zhou, Xiao, et al. "Sparse invariant risk minimization." International Conference on Machine Learning. PMLR, 2022.
- [3] Mehta et al. "On Implicit Filter Level Sparsity in Convolutional Neural Networks". CVPR2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttals. I will keep my score.
---
Rebuttal 2:
Comment: Dear Reviewer xGVL,
Thank you for providing your reviews and valuable suggestions!
By addressing the raised concerns, we believe that the paper has been improved, and we thank the reviewer for that. The references on out-of-distribution works will help us further strengthen our motivation and justify choosing the sparse subnetwork technique over a dense network in our framework. Also, the comparison with filter-level subnetworks in our answer to Q3 helps to further demonstrate that a simple combination of regularization techniques (such as weight decay) and optimizers is inadequate to improve calibration performance.
We hope that the reviewer finds our answers satisfactory and considers updating the assessment accordingly! We will be more than happy to provide any additional clarifications if needed. | Summary: The authors propose a method of sparse training of deep neural networks with the objective of confidence calibration. The method is based on learning an ensemble of sparse models where each model begins with the same training dataset, and is increasingly diversified such that each model in the ensemble is trained on different sets of hard/rare data samples. The authors walk through a theoretical argument on why they believe this helps to improve confidence calibration based on prior work on understanding spurious correlations and training. Results are demonstrated on ResNets with CIFAR-10, CIFAR-100 and Tiny ImageNet for ensemble models of 91% and 85% sparsity (considered over all models in the ensemble). Results are compared with sparse training baselines, such as L1 pruning, CigL, Sup-Ticket and DST Ensemble, along with Sparse Network Ensemble and finally a single dense model.
Strengths: - The authors focus on confidence calibration, and evaluate their results not just in a typical classification context, but also in out-of-distribution and open-set distributions where the effect of confidence calibration is well-motivated and clear.
- Results are on reasonable models and datasets, with mostly appropriate baselines (at least on the sparsity side)
- Results suggest compared to dense and most sparse baselines the proposed method improves confidence calibration significantly.
- The proposed "Robust loss" which focuses on ensembles learning from a diverse set of difficult examples found during training is intuitive.
- Reasonable presentation overall, although could be cleaned up and made easier to understand.
Weaknesses: - I found the motivation for using sparse ensembles (vs. a dense ensemble) lacking, with no clear reason why the proposed method would not also work for a dense ensemble, and notably a dense ensemble is missing as a baseline. If the motivation is improved training performance, there is no description or quantification of that improvement, with a comparison to the dense baseline to understand any potential tradeoffs; if the motivation is improved calibration compared to a dense ensemble, then this should similarly be explicitly shown.
- No ablation analysis of the effects of the various aspects of the proposed method, i.e., sparse ensemble vs. dense ensemble vs. the proposed robust loss, with sparse ensemble vs. robust loss being the most important.
- I found the "Theoretical Analysis" to be quite far from theoretical, and more motivation for the proposed method than a proof of any sort. Indeed considering how poorly spurious correlations are understood in the field, I believe the author's apparent claims that (a) the over-confidence of neural networks is due to spurious correlations alone, and (b) the approach proposed by the authors addresses spurious correlations can only be an over-claim. While this section is fine as a motivation for the method, this cannot be claimed to be a proof or formal demonstration that the method reduces spurious correlations in general.
- What appears to be one of the most important/strongest baselines (DST Ensembles) is missing from all but the CIFAR-10 and CIFAR-100 classification results.
- For TinyImageNet, a sparse ensemble without the training data curriculum appears to have a relatively similar effect to the proposed method at the highest sparsity level in drastically improving confidence calibration, again making me wonder about an ablation analysis of the method.
Minor issues:
- Writing is fairly verbose and in some cases a bit repetitive, could be made more easy to understand/compact.
- Confusing usage of "density" rather than "sparsity" unlike most of the literature in the field.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - What is the motivation for using sparse vs. dense ensembles in the proposed method?
- What is the effect of the different aspects of the proposed method, i.e., the robust loss, sparse vs. dense training, and ensembling?
- Why are dense ensembles not a baseline?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors did not AFAIK address any such limitations, although improving confidence calibration of neural networks would have positive societal impact overall I believe.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable comments/suggestions. We summarize our responses as follows.
**Q1: Motivation of using sparse ensemble.**
Please refer to our answer to Q1 in the general response. In addition to better calibration performance, sparse ensembles also achieve a significant computational advantage over dense ensembles. Specifically, DRE uses only 15\% of the weights present in a single dense network, leading to a significant reduction in computational cost. As shown in Table 10 of the paper, a dense ensemble with 3 dense networks requires a total of 12.42 ($\times 10^{9}$) FLOPs, whereas DRE only uses 1.31 ($\times 10^{9}$).
**Q2: Ablation study on sparse ensemble vs deep ensemble.**
Thank you for this suggestion. In our response to Q1, we have compared the proposed sparse ensemble DRE with (1) a standard dense ensemble with each dense network trained using the ERM loss and (2) a dense ensemble trained using the proposed robust loss. The results (Table 1 in the attached file) confirm that DRE achieves much better calibration performance than dense ensembles. We explain the reason for the performance difference in our answer to Q1 as well.
**Q3: Importance of theoretical Analysis.**
Please refer to our answer to Q2 in the general response. To add more empirical evidence that supports our theoretical contribution, we have conducted additional experiments on the WaterBird dataset, which contains explicit spurious correlations [7]. As shown in Table 4 (b) (in **the attached pdf file of the general rebuttal**), the proposed DRE greatly improves the calibration performance compared to the sparse network ensemble (SNE) on testing samples where there are no spurious correlations (last two columns) between the background and the bird in the image. These are the samples with waterbirds on a land background or landbirds on a water background. The result demonstrates that our model indeed learns from important features instead of spurious correlations. In contrast, on the testing samples that hold spurious correlations, i.e., waterbirds on a water background and landbirds on a land background (the Spurious columns in Table 4 (b)), DRE and SNE achieve comparable performance. This is because overconfident predictions are preferred on these testing samples, as they are most likely to be correct by benefiting from the spurious correlation.
**Q4. DST Ensemble Baseline.**
Thank you for the comment. We would like to clarify that we have included the comparison with the DST ensemble for Cifar10, Cifar100, as well as TinyImageNet. For the classification task, the DST ensemble result for TinyImageNet is reported in Table 6 of the Appendix. Furthermore, for the out-of-distribution experiments on Cifar10 and Cifar100, the reason for not including the DST ensemble method is provided in lines 342-344. Specifically, on Cifar100-C, the accuracy of DST is more than **11\%** lower than that of DRE on both architectures, whereas on Cifar10-C, the accuracy of DST is more than **6\%** lower than that of DRE on both architectures.
**Q5. For TinyImageNet a sparse ensemble without the training data curriculum appears to have a relatively similar effect at the highest sparsity level as the proposed method on improving confidence calibration drastically...**
Thank you for pointing out this important observation. First of all, this provides another important piece of empirical evidence that, compared to dense networks, sparse models and ensembles thereof (i.e., SNE) help to improve calibration. Second, while SNE achieves significant improvement over the other baselines, the proposed DRE further outperforms SNE to a large extent in most cases. For example, with the WideResNet101 backbone, the ECE score of SNE is still twice that of DRE; at the 15\% density level, the ECE scores of SNE are 5-6 times those of DRE. These results are consistent over all datasets and all types of backbones.
**Q6. The authors did not AFAIK address any such limitations...**
We apologize for the confusion regarding limitations. In Appendix E.2, we have discussed the limitations and potential future extensions of our work.
---
Rebuttal Comment 1.1:
Comment: First, I’d like to thank the authors for their rebuttal comments.
> Q1: Motivation of using sparse ensemble.
I’d like to clarify that I’m a fan of sparsity in general, but for any method/paper it is important to motivate the work. I believe the author’s comments here do a *much* better job at doing this, but I want to clarify if the authors have revised the paper to motivate sparse ensembles?
> Q2: Ablation study on sparse ensemble vs deep ensemble.
> Thank you for this suggestion. In our response to Q1, we have compared the proposed sparse ensemble DRE with (1) a standard dense ensemble with each dense network trained using the ERM loss and (2) a dense ensemble trained using the proposed robust loss. The results ((Table 1 in attached file)) confirm that DRE is able to achieve a much better calibration performance than dense ensembles. We explain the reason for the performance difference in our answer to Q1 as well.
Thanks for the extra experimental results/requested baseline. I think these are important results in motivating your analysis of, and proposed method in, improving the calibration using sparse ensembles, and it strengthens your results considerably.
> Q3: Importance of theoretical Analysis.
> Please refer to our answer to Q2 in the general response. To add more empirical evidence that supports our theoretical contribution…
Unfortunately this rebuttal answer misses my point entirely, while simultaneously emphasizing it by discussing empirical results only: *there is no theoretical analysis present in your paper*. I’m a big fan of using empirical results to support a hypothesis, and I’d personally be happy if that’s what your paper claimed vis-a-vis spurious examples and your method, but that is not what your paper claims to be doing, or you claim in this rebuttal. Instead you appear to be continuing to claim you have a theoretical analysis, and this is simply not true. It is not acceptable to publish a paper claiming to have a theoretical analysis/support for a claim when it does not.
> Q5. For TinyImageNet a sparse ensemble without the training data curriculum appears to have a relatively similar effect at the highest sparsity level as the proposed method on improving confidence calibration drastically...
> ….. The results are consistent over all datasets on all types of backbones.
My point here is that this result isn’t consistent with this story/the other results, so I think it would be good to explain it, especially since this model/dataset doesn’t seem to benefit much from the proposed method.
*Summary*
While I found that some of the author's rebuttal comments and the extra experiments could address many of my issues with the work, especially the lack of motivation and baselines, the authors' continued insistence that they have a theoretical analysis supporting their method and their hypothesis as to the mechanism behind it — when they simply don't — is not acceptable. Perhaps more frustratingly, I see little need for the over-claims or anything more than empirical analysis to support their method, and would advise the authors to re-evaluate how they present this work.
---
Reply to Comment 1.1.1:
Comment: Many thanks for carefully checking our rebuttal and providing additional comments and valuable follow-up questions. We appreciate the reviewer's confirmation that our rebuttal, along with the extra experiments, could address many concerns raised in the original review, especially regarding the lack of motivation and baselines. In what follows, we provide our response to each of the follow-up questions.
**...if the authors have revised the paper to motivate sparse ensembles?**
We would like to mention that during the rebuttal phase we are not allowed to make changes to the paper. We will incorporate all the changes made during the rebuttal phase into the revised paper.
**Theoretical analysis**
Regarding the theoretical analysis, we are sorry for misinterpreting your comment in our first response, and thank you for the further clarification! We acknowledge that our analysis in Section 3.3 is primarily from the spurious correlation perspective, so it is by no means a complete and thorough theoretical analysis that fully explains the over-confidence issue in deep neural networks. Our goal is to offer some deeper insight into why the proposed approach may work, so that the good performance could potentially generalize to a wider range of datasets and network architectures. The additional experiments conducted in our rebuttal offer more concrete evidence to support the conclusions made in that section. To this end, we formalize the problem setup by clearly stating the assumptions and carry out formal mathematical derivations to arrive at our conclusions. It is worth noting that our analysis also builds upon some recent theoretical advances (such as Zeyuan Allen-Zhu and Yuanzhi Li, "Towards understanding ensemble, knowledge distillation and self-distillation in deep learning", ICLR 2023) but makes novel adaptations to our unique problem setting.
We would appreciate it if the reviewer could provide some additional guidance on how to improve this part. If our analysis lacks the rigor expected of a typical theoretical analysis, we could tone down any statements about the theoretical contribution.
**Result on TinyImageNet**
Regarding the TinyImageNet result, thank you again for further clarifying the question. This is a great point! We believe the reason for the smaller gap between SNE and DRE on TinyImageNet may be related to overfitting. Because of the difficulty of TinyImageNet, a relatively larger network architecture is needed compared to Cifar10 and Cifar100 in order to capture the complex patterns in the data. As such, sparse subnetworks constructed from small- to mid-sized networks are less likely to overfit (or may even underfit), and the SNE ensemble will therefore also be relatively less overfitted than on Cifar10 and Cifar100. It is also interesting to observe that as we move to a larger backbone (i.e., from ResNet101 to WideResNet101), the gap between SNE and DRE increases (along with the chance of overfitting). Furthermore, at a higher density of 15\%, the gap between SNE and DRE becomes even larger, which is consistent with our analysis above.
Thanks again for your feedback and we are happy to answer any follow-up questions. | Rebuttal 1:
Rebuttal: First of all, we would like to thank all the reviewers for spending time to review our paper and providing many constructive suggestions and comments. Here, we summarize our responses to some major questions raised by reviewers:
**Q1: Motivation of using sparse ensemble (Reviewer TYXu)**
Besides the high computational cost, one key motivation for developing sparse ensembles is the poor calibration of dense networks resulting from the memorization effect introduced by an overparameterized architecture, as mentioned at the beginning of the introduction. This phenomenon has been commonly observed and discussed in recent literature (e.g., refs [9, 24] in the paper, along with references [1] and [2] suggested by other reviewers). Our empirical study further confirms it across different datasets and backbone architectures. For example, ref [24] theoretically justifies that overparameterization in dense networks exacerbates spurious correlations, leading to poor calibration; refs [1] and [2] (see references for details) further demonstrate that dense networks are more likely to overfit, leading to poor generalization (and calibration). To avoid overfitting, [1] and [2] also resort to training sparse networks. Our empirical evaluation shows that sparse network training consistently improves calibration performance over multiple datasets and architectures: as can be seen in Tables 1-3 of the paper, sparse networks achieve better ECE scores than their dense counterparts in most settings.
To more clearly demonstrate the advantage of the proposed DRE compared to a dense ensemble, we have conducted additional experiments and present the results in Table 1 of **the attached pdf file**. The Dense Ensemble (w/o DRO) ensembles multiple dense networks, each trained using the standard ERM loss; the Dense Ensemble (w/ DRO) trains multiple dense networks using the DRO loss (i.e., Eq. 1). As can be seen, the proposed sparse ensemble (i.e., DRE) clearly outperforms the dense ensembles to a large extent. It is also interesting to note that the dense ensemble with DRO achieves only a slightly better ECE score than the dense ensemble without DRO. This is because it is more difficult to further diversify dense networks with exactly the same architecture (i.e., nodes and connections). In contrast, using sparse training, we can naturally pick very distinct sparse subnetworks from the original dense network to increase diversity, where each subnetwork is already better calibrated for the reason explained above. Additionally, thanks to the distributionally robust ensemble, we can further diversify the learned subnetworks, leading to much better calibration performance.
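Since Eq. 1 of the paper is not reproduced in this thread, the flavor of a distributionally robust training loss can be illustrated with a generic CVaR-style surrogate that reweights each batch toward its hardest samples. This is an illustrative stand-in under assumed semantics, not the authors' exact DRO objective; the function name and `focus_fraction` parameter are hypothetical.

```python
import numpy as np

def dro_batch_loss(per_sample_losses, focus_fraction=0.5):
    """CVaR-style DRO surrogate: average only the worst `focus_fraction`
    of per-sample losses, so training emphasizes hard/rare samples
    (the worst case over an uncertainty set of data reweightings)."""
    losses = np.sort(np.asarray(per_sample_losses, dtype=float))[::-1]  # descending
    k = max(1, int(focus_fraction * losses.size))
    return float(losses[:k].mean())
```

Intuitively, each ensemble member trained with such a loss concentrates on the samples the others find hard, which is one way diversity across subnetworks could arise.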
**Q2: Importance of theoretical Analysis. (Reviewer TYXu)**
We agree with the reviewer that our theoretical analysis can further strengthen our motivation. We also acknowledge that spurious correlation is only one potential source that can lead to over-confidence in neural networks (but we did not claim that *over-confidence of neural networks is due to spurious correlations alone*). Our analysis primarily focuses on this important perspective, which identifies concrete theoretical evidence on this important connection. The theoretical findings have been further confirmed in our empirical evaluations on multiple datasets over a diverse set of backbone architectures. Therefore, we believe our work is an important step forward in addressing this highly important issue by providing a theoretically sound and empirically feasible solution to effectively lower the model’s false confidence on its wrong predictions resulting from spurious correlations, as evidenced by the improved overall calibration performance. The analysis of other sources that contribute to overconfident predictions remains as an important topic for future research.
**Q3: Authors completely ignore very popular (static/dynamic) sparse training methods. (Reviewer RrYi)**
Thank you for pointing out these sparse training methods that do not require iterative pruning/growing. To most clearly differentiate these works from our main technical contribution, we would like to re-state our primary focus, which is to **achieve calibrated sparse network training**, as indicated by the title of our paper. To this end, we propose a novel Distributionally Robust Optimization (DRO) framework to achieve an ensemble of lottery tickets towards calibrated network sparsification (see lines 8-10 of the abstract). Like the reviewer, we clearly recognize the existence of sparse training techniques that do not rely on pre-training and iterative pruning (see lines 33-34). We use Edge Popup (EP) as a representative of this group of methods (and the plural implies that EP is only one such method). However, like EP, all these methods focus on pushing accuracy up to that of the original dense networks and hence still suffer from severe overfitting, leading to poor calibration performance. This is the *novel technical gap* that we identify and aim to address using our proposed Distributionally Robust Ensemble (DRE). We have empirically shown the poor calibration performance of EP (along with other representative sparse training models, including [3] as mentioned by the reviewer) in both Figure 1 and our experimental results in Tables 1-3 in the main paper and Tables 5-8 in the Appendix.
**References**
- [1] Zhang, Dinghuai, et al. "Can subnetwork structure be the key to out-of-distribution generalization?." International Conference on Machine Learning, 2021.
- [2] Zhou, Xiao, et al. "Sparse invariant risk minimization." International Conference on Machine Learning, 2022.
- [3] Evci, U., et al. Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning, 2020.
Pdf: /pdf/3d6fb8fd71fd7c05bb30040024ac9fdd660f3859.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
MotionGPT: Human Motion as a Foreign Language | Accept (poster) | Summary: This paper proposes MotionGPT, an approach that unifies language modeling with human motion modeling by treating each new motion token the same as a language token. To achieve this, a motion tokenizer based on VQ-VAE is first learned. Then, a pretrained language model is fine-tuned to learn from a unified vocabulary of motion and language and then instruction-finetuned to perform tasks such as text-to-motion, motion completion, and motion captioning. Experiments show that the proposed model achieves state-of-the-art performance for the proposed tasks.
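The motion tokenizer described in the summary maps continuous per-frame motion features to discrete codebook indices, which the language model then treats like word tokens. Here is a minimal sketch of the VQ-VAE quantization step only; the shapes, the `quantize_motion` helper, and the plain nearest-neighbor lookup are illustrative assumptions, not MotionGPT's actual implementation.

```python
import numpy as np

def quantize_motion(features, codebook):
    """Return, for each frame feature, the index of its nearest codebook entry.

    features: (T, D) encoder outputs for T frames; codebook: (K, D) learned entries.
    The returned indices are the discrete 'motion tokens'.
    """
    # squared Euclidean distance from every frame to every codebook entry: (T, K)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

In the unified-vocabulary setup the summary describes, these indices would be offset past the text-token ids so that motion and language share one token space.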
Strengths: - The idea of treating motion as discrete tokens for language models is novel and sound, and can enable a large number of possible applications. The showcased tasks are also comprehensive and demonstrate the flexibility of the proposed model.
- The provided quantitative and qualitative results show state-of-the-art performance and generate natural and good-looking human motion. The multi-task ability is impressive and shows that the model can handle multiple different tasks and natural language instructions.
- Extensive ablation is provided.
Weaknesses: - I do find the lack of failure analysis a bit concerning. It is difficult to gauge how well the model learns the relationship between motion and language.
- Similarly, I find the evaluated motion in the provided video and supplement to be relatively simple and clear instructions, which do not really conform to the objective of "generalizing effectively to unseen tasks or data". For instance, in the text-to-text demonstration, the word "praying" is showcased, but I wonder how well the motion generator part can handle words that have semantic meaning but could be unseen.
Minor Error: T2M-GPT's and MDM's citations are mixed in Table 1.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do TM2T, T2M, and poseGPT capture all human motion in their training dataset's discrete latent code? How is the reconstruction loss on ALL the training data?
---
After rebuttal, my main concerns about failure analysis and text complexity are addressed and I would like to maintain a positive rating of this work.
---
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: More failure cases should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your approval of our idea, human motion as a foreign language, as well as what it could enable on applications. We will fix the mixed citations, add more failure cases, and analyze the zero-shot ability of MotionGPT in the paper.
📝 **Q: Failure analysis. Zero-shot ability on handling words that have semantic meaning but could be unseen.**
💡 **A:** As shown in Fig. 12, we provide both zero-shot cases and failure cases. Benefitting from strong language models, MotionGPTs can understand words unseen in the text-to-motion training set, like "scuttling" and "barriers", and generate correct motions based on the meaning of sentences. However, it still struggles to generate unseen motions, like gymnastics, even if MotionGPTs understand the text inputs.
📝 **Q: How well MotionGPT learns the relationship between motion and language?**
💡 **A:** Unlike the previous motion generators using the text encoder of CLIP for conditions, please note that MotionGPTs leverage language models to learn the motion-language relationship, instead of relying on text features from CLIP. According to our zero-shot results (cf. Fig. 12) and performances on multi-tasks (cf. Fig. 10), MotionGPTs establish robust connections between simple/complex texts and simple motions in evaluations, but they fall short when it comes to complex-text to complex motion translation.
📝 **Q: Do TM2T, T2M, and poseGPT capture all human motion in their training dataset's discrete latent code?**
| Method | MPJPE$\downarrow$ | MPJPE $\downarrow$ | ACCL $\downarrow$ | FID $\downarrow$ | DIV $\rightarrow$ |
|---|---|---|---|---|---|
| VPoser-t | 75.6 | 48.6 | 9.3 | 1.430 | 8.336 |
| ACTOR | 65.3 | 41.0 | **7.0** | 0.341 | **9.569** |
| MLD-1 | **54.4** | 41.6 | 8.3 | 0.247 | 9.630 |
| MotionGPT (Ours) | 55.8 | **40.1** | 7.5 | **0.067** | 9.675 |
**Motion reconstruction comparison.**
| Method | FID $\downarrow$ |
|-----------|-------------------|
| MotionGPT (Ours) | $0.510^{\pm.016}$ |
| T2M-GPT | $0.514^{\pm.029}$ |
| MLD | $\boldsymbol{0.404}^{\pm.027}$ |
**Comparison of FID in text-to-motion task on KIT-ML dataset.**
💡 **A:** Given sufficient training or testing data from the same dataset, motion reconstruction is not a challenging task for both VAE and VQ-VAE. We have provided the evaluation on motion reconstruction in Tab.8. However, when dealing with a limited amount of motion data, like the KIT dataset, the VAE model shows better ability in motion interpolation, surpassing VQ-VAE.
A relevant evaluation is shown above (also in Tab.7), where MLD (VAE) outperforms MotionGPT and T2M-GPT (VQ-VAEs) on FID.
The real challenge lies in reconstructing complex motions, such as diving or gymnastics sports. Existing motion generators struggle to accurately reconstruct complex motions using a codebook extracted from daily motion datasets. Collecting these complex yet valuable motions is still a significant challenge to the motion research community.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: I thank the authors for the detailed response.
I find most of my concerns resolved and would maintain a positive score. | Summary: In view of the idea that motion could be perceived as a form of body language, the authors propose to fuse motion and language to perform a unified motion-language pre-training.
In detail, motion is quantized into discrete tokens in the same form as natural languages.
Then, language modelling is performed on both motion and text in a unified manner.
Furthermore, prompt tuning is adopted to fine-tune the pre-trained model.
Experiments demonstrate the impressive performance on multiple motion tasks.
Strengths: The idea of unifying motion and language into tokens for uniform pre-training is interesting and novel.
The uniform motion-language model manages to provide a solution for a wide range of tasks.
The provided demo is rather impressive and convincing.
Extensive ablation studies provide a detailed analysis on the effectiveness of different design choices.
Weaknesses: Though the quantized representation provides the ability to unify motion and text, it also imposes constraints on the motion representation due to the sequence-level encoding. In other words, when the operation granularity is smaller than the down-sample rate, I'm not sure whether the method could provide satisfying performance. For example, motion in-between seems to be designed as a token-level in-between. If only given a start frame and an end frame as the in-between input (which could be a more practical application scenario), would the model perform well?
Both HumanML3D and KIT are limited in the vocabulary size and the overall dataset size compared to language datasets as I know. Therefore, I understand the limited performance on KIT and when increasing the scale of the model. While in view of the recent success of LLMs, I think the authors should pay attention to unifying current available datasets to exploit the scalable potential of language models when processing large scale data besides increasing model size.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I'm interested in the vocabulary that VQ-VAE learned. Is it possible to visualize some of the tokens? Or directly generate description on each single token?
How is the down-sample rate chosen? It is a fundamental hyper-parameter that decides the overall granularity of the model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide their discussion on the limitation of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 📝 **Q: Motion Down-sample, if only given a start frame and an end frame as the in-between input, would the model perform well?**
💡 **A:** VQ-based methods, such as MotionGPT and T2M-GPT, employ downsampling tricks to enhance the density of the codebook or tokens and to reduce computing costs. This indeed becomes a constraint when the operation granularity is smaller than the down-sample rate. However, when only a start frame and an end frame are provided as the in-between inputs, some technical tricks can address this issue, such as repeating a single start or end frame up to the window size as inputs and removing the redundant parts from the outputs. This does not significantly impact the effectiveness of the model, as there are often static beginnings or endings in the GT motion data.
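A minimal illustrative sketch of this padding trick (not from the paper; the frame values, helper names, and window size below are toy assumptions):

```python
# Toy sketch: repeat single endpoint frames up to the tokenizer window size,
# then strip the redundant repeats from the generated sequence.

def pad_endpoints(start_frame, end_frame, window=4):
    """Repeat each single endpoint frame so it fills one token window."""
    return [start_frame] * window, [end_frame] * window

def strip_padding(generated, window=4):
    """Drop the repeated endpoint frames, keeping one copy of each."""
    return generated[window - 1 : len(generated) - (window - 1)]

start_pad, end_pad = pad_endpoints("s", "e", window=4)
full = start_pad + ["m1", "m2"] + end_pad       # model in/output, conceptually
assert strip_padding(full, window=4) == ["s", "m1", "m2", "e"]
```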
📝 **Q: While in view of the recent success of LLMs, the authors should pay attention to unifying current available datasets to exploit the scalable potential of language models when processing large-scale data besides increasing model size.**
💡 **A:** We appreciate your insight and totally agree with this suggestion. We faced this limited-dataset issue while implementing MotionGPT and in our further research. It is hard but valuable work to unify and collect a larger motion dataset. Fortunately, some researchers are working on this problem, as seen in recent work like Motion-X and other datasets, which hold promise for advancing large-scale motion models. We intend to further evaluate MotionGPT on these larger datasets once they become available.
📝 **Q: Visualize some of the tokens in the vocabulary that VQ-VAE learned.**
💡 **A:** As shown in Fig.13, we visualize these motion tokens in motion vocabulary $V_m$ and their corresponding localized spatial-temporal contexts, depicted within 4-frame motion segments. However, MotionGPT falls short in generating descriptions for each individual token, as the training is conducted on token sequences.
📝 **Q: How is the down-sample rate chosen? It is a fundamental hyper-parameter that decides the overall granularity of the model.**
| Downsampling | MPJPE $\downarrow$ | MPJPE $\downarrow$ | ACCL $\downarrow$ | FID $\downarrow$ | DIV $\rightarrow$ |
|---|---|---|---|---|---|
| $l=1$ | 76.2 | 49.5 | 19.5 | 0.421 | 9.613 |
| $l=2$ | **52.6** | **37.7** | **9.5** | 0.135 | 9.722 |
| $l=4$ | 55.8 | 40.1 | 7.5 | **0.067** | 9.675 |
| $l=8$ | 62.7 | 45.3 | 8.7 | 0.223 | **9.584** |
💡 **A:** We selected the down-sample rate based on the frames-per-second (FPS) of the HumanML3D and KIT-ML datasets, which is 20 fps. Down-sampling by a factor of 4 to achieve 5 fps ensures distinctiveness between motion frames, prevents redundancy, and accelerates training. This choice was also made to ensure a fair comparison, as we utilized the same down-sample rate as T2M-GPT. As shown in the above table, we provide an ablation study on this parameter, where a factor of 4 achieves the best FID in motion reconstruction.
---
Rebuttal Comment 1.1:
Comment: Thanks for the helpful responses. My major concerns are addressed, | Summary: This paper presents a motion-language model via a shared vocabulary, where the texts are represented by original tokens, and the motions are encoded by a trained discrete tokenizer. Based on a pre-trained encoder-decoder framework, i.e., T5, the authors fine-tune the T5 with masked modeling on motion-language paired data. Finally, the obtained model is finetuned with specific instructions for the target job (text-to-motion, motion-to-text, motion prediction, motion in-between in the paper). The experiments demonstrate the MotionGPT does such tasks well.
Strengths: - this paper has a clear and interesting motivation, i.e., jointly learn motion-language on a token-to-token model, enabling the trained model to be aligned with the text instructions.
- the main paper, along with the supp., provides solid and comprehensive experimental results.
Weaknesses: - It seems the work is finished in a rush. For example, the instruction tuning is performed on each task independently, acting like a simple finetuning (the experiments also give some pieces of evidence, where it depends on the task-specific tuning). In this way, is it just a pretrain-finetune scheme? It is interesting to see any difference with the previous pretrain-finetune pipeline, such as enabling more abilities like reasoning, and zero/few-shot learning.
- The motion prediction lacks comparison with broader models like T2M-GPT, since auto-regressive models are better at prediction.
- Although the authors use a strong pre-trained language model, and finetune it on each task independently, it shows a limited performance gain compared with previous works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why do you choose T5 as the base model? which is an encoder-decoder architecture. Have you tried a decoder-only model like LLaMA since it is a more straightforward solution for instruction tuning?
- How do you implement the MDM on the motion prediction and in-between tasks? the numbers of MDM exist a large gap with text-to-motion. I mean, for diffusion models, there are lots of details in implementation. For example, reset the prefix 20% motion at each diffusion step. Lacking the details makes the numbers not fully convincing.
- How do you merge the text vocab and motion vocab in detail? concatenating them together?
- For tuning on each task, do you tune the entire model or just part of it?
Table 2 is redundant since all the information is duplicated in the latter tables.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your approval and insightful comments. We will address your concerns in the following comments, re-organize this redundant Tab. 2, and update our paper accordingly.
📝 **Q: Instruction tuning, Reasoning, and Zero-shot learning**
💡 **A:** We propose instruction tuning to train a single MotionGPT across all motion-related tasks, while task-specific tuning is to train and evaluate MotionGPTs on a single task. We employ these two training schemes to study the ability of MotionGPT across multiple tasks. As shown in Fig. 12, we provide zero-shot cases. Benefitting from strong language models, MotionGPTs can understand words unseen in the text-to-motion training set, like "scuttling" and "barriers", and generate correct motions based on the meaning of sentences. However, it still struggles to generate unseen motions, like gymnastics, even if MotionGPTs understand the text inputs. Moreover, this reasoning provides inspiring insight for our future research. We will explore this direction and provide more detailed zero-shot learning evaluations.
📝 **Q: Comparison of motion prediction with T2M-GPT**:
|Method | FID $\downarrow$ |Diversity $\rightarrow$ | ADE $\downarrow$ | FDE $\downarrow$|
|:--:|:--:|:--:|:--:|:--:|
|Real |0.002 |9.503 | - | - |
|MDM |6.031 |7.813 |5.446 |8.561 |
|T2M-GPT | 2.056 | 8.635|6.161| 8.302 |
|MotionGPT (Ours) | **0.905** | **8.972** | **4.745** | **6.040** |
💡 **A:** We have added T2M-GPT to this comparison of motion prediction in Tab. 5. As shown in the above table, all three methods, MDM, T2M-GPT, and MotionGPT, follow the same setting, given the first 20% of the motion and generating the remainder, as mentioned in the implementation details, Appendix B.5. Benefitting from its larger model size, MotionGPT outperforms the two other methods in this task.
📝 **Q: Limited performance gain with strong language models.**
💡 **A:** We thought MotionGPT, using a significantly larger language model, would surpass all existing methods in all tasks. However, the evaluation shows MotionGPT achieves SOTA results in 18 out of 23 metrics, where many improvements are only small gains. This can be attributed to the limited size of the dataset. As mentioned in R3, both HumanML3D (14,616 motions) and KIT (3,911 motions) are limited in vocabulary size and overall dataset size, particularly when compared to billion-level language datasets, which affects the efficacy of large-scale models. Benefitting from recent dataset works, like Motion-X, we will evaluate the performance gain of MotionGPT in larger datasets once they become available.
📝 **Q: Why choose T5 as the base model? an encoder-decoder architecture. Have you tried a decoder-only model like LLaMA?**
💡 **A:** The first language model that we used to build MotionGPTs was LLaMA-13B. However, it showed insufficient performance and low training efficiency. We assume the reason is the limited dataset size compared to the large parameter count and language data of LLaMA. **As shown in Tab. 15**, we also tried a smaller decoder-only backbone, GPT2-Medium, and provide the results. We thus chose T5-770M, a small but common language model, as our final backbone, because many previous vision-language multimodal works, like Unified-IO and BLIP, have chosen this encoder-decoder architecture, which shows strong power in addressing multi-modal tasks. In addition, decoder-only models have the advantage of self-supervised training without paired data, but since we have paired data, this advantage is greatly weakened. We are still working on collecting a large motion dataset for larger motion-language models.
📝 **Q: How do you implement the MDM on the motion prediction and in-between tasks?**
💡 **A:** Thank you for your inquiry. We follow the approach outlined in Appendix B.4 and Line-296 of our paper, where we highlight that MDM achieves the motion in-between task using a masked motion "in-painting" technique. Specifically, this involves fixing the initial and final portions of the motion and allowing the model to generate the central portion. To adapt this concept for motion prediction, we similarly fix a portion of the motion – in our case, the first 20% – and generate the subsequent sequence.
📝 **Q: How do you merge the text vocab and motion vocab in detail? concatenating them together?**
💡 **A:** To ensure a shared distribution between language and motion, we initialize the motion tokens separately and concatenate them alongside the language tokens. This step ensures a balanced representation that encompasses both modalities. Besides, the token embeddings are actively trained during the entirety of stages 2 and 3, ensuring a comprehensive fusion of language and motion knowledge. We will also elaborate on this concatenation in the final version.
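As a hypothetical illustration (the toy vocabularies and token names below are our own, not the paper's), concatenating the vocabularies amounts to offsetting the motion codebook indices by the text vocabulary size:

```python
# Toy sketch: merge a text vocabulary and a motion codebook into one
# unified vocabulary by offsetting the motion token ids.
text_vocab = {"<pad>": 0, "walk": 1, "jump": 2}   # toy text tokens
codebook_size = 4                                  # toy motion codebook
offset = len(text_vocab)
motion_vocab = {f"<motion_{k}>": offset + k for k in range(codebook_size)}
unified_vocab = {**text_vocab, **motion_vocab}

assert unified_vocab["<motion_0>"] == 3            # first motion id follows text ids
assert len(unified_vocab) == len(text_vocab) + codebook_size
```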
📝 **Q: For tuning on each task, do you tune the entire model or just part of it?**
💡 **A:** To address individual tasks, we adopt a focused approach where the entire model is fine-tuned. Our rationale lies in the fact that, for each specific task, our emphasis is on optimizing task-specific performance, without retaining an excessive amount of knowledge learned from other tasks. Besides, we exclusively fine-tune on the text-to-motion task, while other tasks are reported without task-specific tuning.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. I stick to the positive recommendation. | Summary: This paper introduces a motion generation pipeline called MotionGPT, which is based on GPT. MotionGPT utilizes VQ-VAE to discretize human poses into tokens and combines them with language tokens to create a unified codebook. The model is initially pre-trained on motion language data and subsequently fine-tuned on prompt-based tasks to enable it to perform various motion-language tasks.
Strengths: 1. This is the first work that explores the application of Large Language Models (LLMs) in the field of text-driven motion generation. The proposed prompt finetuning method further extends the scope of applications by including 10 different tasks. These methods provide inspiration for future research in this area.
2. The performance in the Motion-to-Text task shows a significant improvement on Bleu@4 and Cider compared to TM2T.
3. The paper is well-written and effectively conveys information in a clear and understandable manner.
Weaknesses: 1. MotionGPT exhibits poorer performance in the crucial text-to-motion task, with a significant gap in FID metrics compared to T2M-GPT. Particularly on the KIT-ML dataset, there is a considerable difference in R Precision compared to T2M-GPT, MLD, and MotionDiffuse.
2. The demo video provides limited comparisons with other examples. Apart from the "crouch down" example, the performance of MotionGPT is not noticeably superior to T2M-GPT in the provided examples. These examples do not sufficiently demonstrate an advantage in terms of generation quality.
3. The overall technical contribution is limited. The major distinction from T2M-GPT lies in the combination of motion modality and language modality for modeling, along with subsequent prompt finetuning. However, in the current version, they appear more like shared intermediate layers rather than truly integrated. For instance, a potential approach could involve language-based motion editing, where given a reference motion sequence and a desired text modification, the algorithm produces the edited result. Such task types would better illustrate the advantages of unified modeling. Additionally, considering the significant performance gap between MotionGPT and T2M-GPT in standard text-to-motion tasks, MotionGPT seems to sacrifice accuracy in exchange for additional functionalities. Such technical contribution does not meet the bar set by NeurIPS conference.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can MotionGPT perform motion editing or motion composition similar to MotionDiffuse and MDM?
2. The supplementary material states that there were only 15 users in the user study for Motion-to-Text. The number of testers for Text-to-Motion is not mentioned. However, 15 users may be considered an insufficient sample size for a reliable evaluation, especially for Motion-to-Text. It is recommended to have a larger sample size, preferably around 50 or more, to provide a more credible assessment.
3. What is the reason behind the significant difference in performance between the KIT-ML and HumanML3D datasets?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 📝 **Q: Motion Quality and Performance Gain**
| Method | FID $\downarrow$ |
|:--|:--|
| MDM | $0.544^{\pm.044}$ |
| MotionGPT | $0.160^{\pm.008}$ |
| T2M-GPT | $\boldsymbol{0.116}^{\pm.004}$ |
Comparison of FID in text-to-motion task on HumanML3D dataset.
| Method | FID $\downarrow$ |
|:--|:--|
| T2M-GPT | $0.514^{\pm.029}$ |
| MotionGPT | $0.510^{\pm.016}$ |
| MDM | $\boldsymbol{0.497}^{\pm.021}$ |
Comparison of FID in text-to-motion task on KIT-ML dataset.
💡 **A:** The FID metric primarily focuses on motion quality rather than the correlation between motion and text. While MDM serves as a successful benchmark for motion generation, both MotionGPT and T2M-GPT outperform MDM by a margin of 0.38~0.43 on the FID scale. However, the difference in motion quality among these three works is not significant in the supplementary video. Additionally, MDM outperforms the two vector-quantized methods, MotionGPT and T2M-GPT, in terms of FID on the KIT dataset. This can be attributed to KIT's limited number of 3,911 motion sequences, which makes it challenging to construct a comprehensive motion codebook. More importantly, MotionGPT contributes to multiple motion tasks with an LLM, particularly generating both text and motion within a single model, rather than aiming to improve the FID metric.
📝 **Q: Performance Gain on R-Precision in KIT**
💡 **A:** The evaluation of R-Precision in the KIT dataset relies on the text encoder, which is built using a limited set of 6,353 textual descriptions. In contrast, MotionGPTs benefit from the LLM and large language data, enabling them to generate longer and more natural language descriptions for motion. However, this leads to a discrepancy between the generated descriptions and the GT descriptions, resulting in a lower R-Precision.
📝 **Q: MotionGPT seems to sacrifice accuracy in exchange for additional functionalities.**
💡 **A:** As shown in Fig. 10, MotionGPT achieves SOTA on 18 out of 23 metrics across four motion-related tasks. Additionally, as mentioned by R3, both HumanML3D and KIT are limited in overall dataset size, particularly when compared to billion-level language datasets. This affects the efficacy of large-scale models. We will further employ a larger motion-text dataset to evaluate MotionGPT. Besides, MotionGPTs introduce motion-language pre-training, as well as its zero-shot ability, which is a promising direction worth exploring and could stimulate self-training procedures for further research.
📝 **Q: Can MotionGPT perform motion editing or motion composition similar to MotionDiffuse and MDM?**
|Method | FID $\downarrow$ |DIV $\rightarrow$ | ADE $\downarrow$ | FDE $\downarrow$|
|:--|:--|:--|:--|:--|
|Real |0.002 |9.503 | - | - |
|MDM |6.031 |7.813 |5.446 |8.561 |
|T2M-GPT | 2.056 | 8.635|6.161| 8.302 |
|**MotionGPT (Ours)** | **0.905** | **8.972** | **4.745** | **6.040** |
Comparison of motion prediction on HumanML3D dataset using motion data only.
💡 **A:** Referring to MDM, motion editing has two categories: body part editing and motion completion in the temporal domain. MotionGPT is capable of the latter, which includes motion prediction and motion in-between; it outperforms both MDM and T2M-GPT in the table above. However, when it comes to body part editing, vector quantization (VQ)-based methods, like MotionGPT and T2M-GPT, are not as suitable as diffusion-based models that utilize diffusion inpainting on raw motion data. We agree that editing body parts with an LLM and prompts is a promising direction but still needs exploration.
📝 **Q: Technical contribution.**
💡 **A:** Thanks for pointing out that MotionGPT is the first work that explores motion generation with an LLM. Firstly, we propose motion-language pre-training on an LLM rather than using CLIP models like previous methods. It is not a trivial combination since it needs to model and generate two distinct modalities from scratch. To achieve this in MotionGPTs, we introduce a new training procedure: a motion-language pre-training stage and an instruction tuning stage. Secondly, we propose MotionGPT as a uniform motion-language generative pre-trained model to address various motion tasks, particularly text-to-motion and motion-to-text. To the best of our knowledge, it is the first exploration of achieving such large models in the motion domain. We have developed new instruction templates and multi-task evaluation protocols, which could also contribute to the motion domain.
📝 **Q: User-study**
💡 **A:** We conducted a more detailed user study to evaluate our model's performance. For text-to-motion assessment, we generated motions for 80 HumanML3D test set descriptions, comparing MotionGPT with MDM and T2M-GPT, alongside GT. The semantic and realism studies presented text-video pairs to participants, asking which motion **corresponded better** to the text or was **more realistic**, respectively. In the motion-to-text study, we visualized 50 GT motions with GT descriptions and generated corresponding textual descriptions using TM2T and our method. Each participant answered a batch of questions drawn randomly from all questions, and 19 unqualified participants among a total of 110 were identified and excluded using 2 'catch trial' questions. Each video pair was reviewed by multiple participants, with a majority vote determining the superior method; equal scores were assigned for tied results.
| Question | MotionGPT vs MDM | MotionGPT vs T2M-GPT | MotionGPT vs GT |
|:--|:--:|:--:|:--:|
| Which of the two motions is more realistic? | 54% | 53% | 48% |
| Which of the two motions corresponds better to the text prompt? | 57% | 56% | 49% |
| Question | MotionGPT vs GT | MotionGPT vs TM2T |
|:--|:--:|:--:|
| Which description can better describe the motion? | 48% | 55% |
The results above indicate that our generated motions improve in quality and text alignment, performing comparably to ground truth.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal Comment
Comment: I express my gratitude to the authors for their comprehensive rebuttal. I am pleased to note that my initial concerns have been satisfactorily addressed. I have also taken into account the input from other reviewers, and it appears that no significant additional concerns have been raised. I am inclined toward recommending acceptance and will revise my score after reviewer discussion period.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the recognition of our work. Your valuable insights are greatly appreciated. Additional evaluation/ablation and corresponding explanations will be included in the final version. | Rebuttal 1:
Rebuttal: We thank all the reviewers for approvals: The idea of **unifying motion and language into tokens for uniform pre-training** is **novel and sound** (R3, R4), and this motivation is **clear and interesting** (R2). This paper provides **inspiration for future research** (R1) and **impressive demo** (R3); has **comprehensive experimental results** (R2), **extensive ablation studies** (R3, R4), and **impressive multi-task ability** (R4). We will address the concerns and fix the mixed citations.
(Reviewer rpBK - R1, Reviewer gizA - R2, Reviewer symu - R3, Reviewer hFS6 - R4)
**Motivation and Novelty** :
We present MotionGPT to address various human motion-related tasks within one single unified model, by unifying motion modeling with language through a shared vocabulary. To train this unified model, we propose an instructional training scheme under the protocols for multiple motion-language tasks, which further reveals the potential of Large Language Models (LLMs) in motion tasks beyond the success of language generation. However, this combination is non-trivial since it needs to model and generate two distinct modalities from scratch. Contrary to previous work leveraging CLIP to extract text embeddings as motion generation conditions, like T2M-GPT, MotionGPT introduces motion-language pre-training on an LLM, so it can leverage the strong language generation and zero-shot transfer abilities (see Fig. 12) of pre-trained language models, as well as generate human language and motion in a unified model.
**Limited Datasets and Evaluation Metrics** : Both HumanML3D (14,616 motions) and KIT (3,911 motions) are limited in the vocabulary size and the overall dataset size, also mentioned by Reviewer symu, particularly when compared to billion-level language datasets. This hampers the efficacy of large-scale models within the motion domain. The KIT dataset, with only 3,911 motion sequences, falls short in training large models with billions of parameters, as the extracted motion vocabulary struggles to represent all potential motions. Fortunately, during this review, recent works such as Motion-X, propose significantly larger motion datasets with multi-modal annotations, which hold promise for advancing large-scale motion models. We intend to further evaluate MotionGPT on these larger datasets once they become available.
Furthermore, the metrics employed for motion-to-text evaluation, such as R-Precision on KIT, depend on a text encoder constructed from a limited pool of 6,353 textual descriptions. In contrast, MotionGPT benefits from LLMs and large-scale language data, enabling it to generate longer and more natural language descriptions for motion. However, this leads to a discrepancy between the generated descriptions and the ground-truth (GT) descriptions, resulting in a lower R-Precision. We also note a recent work, Text-to-Motion Retrieval (TMR), a contrastive model on text and motion, which could assist MotionGPT in more accurate text-motion evaluations.
Pdf: /pdf/d1aeffe419a3e5fdf35d603fc6adb9b2bac75740.pdf | NeurIPS_2023_submissions_huggingface | 2023
Regularizing Neural Networks with Meta-Learning Generative Models | Accept (poster) | Summary: This paper proposes a regularization method, meta generative regularization (MGR), based on a bi-level optimization framework for generative data augmentation. MGR consists of two terms: pseudo consistency regularization (PCR) and meta pseudo sampling (MPS). Training with MGR is formalized as alternating optimization of a main classification model and a finder network that searches for latent vectors of a generative model (e.g., StyleGAN). To maximize the gain from synthetic samples, MGR regularizes the feature-extractor part of the classification model using PCR, while MPS effectively samples from the GAN the examples most useful for generalization.
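The alternating bi-level scheme the summary describes can be illustrated on a 1-D toy problem. All names and the quadratic surrogate losses below are illustrative assumptions, not the authors' code: a "classifier" parameter `theta` is trained on real data plus a consistency-style penalty on a synthetic sample `G(phi)`, while a "finder" latent `phi` is meta-updated with a finite-difference hypergradient of the validation loss.

```python
import numpy as np

rng = np.random.default_rng(0)
x_real = rng.normal(2.0, 0.5, size=64)   # training data (1-D toy)
x_val = rng.normal(2.0, 0.5, size=64)    # validation split (outer objective)

def generator(z):
    """Frozen generator G(z); the finder only searches its latent space."""
    return 3.0 * np.tanh(z)

def grad_train_theta(theta, phi):
    # d/d theta of  mean((x_real - theta)^2) + 0.1 * (G(phi) - theta)^2
    return -2.0 * np.mean(x_real - theta) - 0.2 * (generator(phi) - theta)

def grad_train_phi(theta, phi):
    # d/d phi of the synthetic penalty 0.1 * (G(phi) - theta)^2
    dG = 3.0 * (1.0 - np.tanh(phi) ** 2)
    return 0.2 * (generator(phi) - theta) * dG

def grad_val_theta(theta):
    return -2.0 * np.mean(x_val - theta)

def val_loss(theta):
    return np.mean((x_val - theta) ** 2)

theta, phi = 0.0, 0.1
lr, lr_meta, eps = 0.05, 0.5, 1e-3
for _ in range(300):
    # Inner step: update the classifier on the real + synthetic loss.
    theta -= lr * grad_train_theta(theta, phi)
    # Outer step: perturb theta along the validation gradient and take a
    # central difference of the finder gradient (finite-difference hypergradient).
    g_val = grad_val_theta(theta)
    t_plus, t_minus = theta + eps * lr * g_val, theta - eps * lr * g_val
    hyper = -(grad_train_phi(t_plus, phi) - grad_train_phi(t_minus, phi)) / (2 * eps)
    phi -= lr_meta * hyper
```

In this sketch the finder never touches the generator's weights; it only moves the latent code so that the induced classifier update lowers the validation loss, which mirrors the motivation for meta-optimizing a small finder network rather than the whole generator.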
Strengths: - Use pseudo consistency regularization to address the distortion of decision boundary.
- Introduce a subnetwork called a finder to improve the training of classifier, and address the unstable training of generator.
Weaknesses: - The data-driven data augmentation is not novel for the community, e.g., AutoAugment [1], Population Based Augmentation [2], Fast AutoAugment [3], ect. The proposed method is expensive in computation, and only achieve comparable or even worse performance than
hand-designed data augmentation methods, e.g., SnapMix [4]. The advantage of proposed method is not clear to readers.
- The meta-learning technique used is similar to that of Generative Teaching Networks [5].
- The experimental results cannot support the effectiveness of the proposed method. The compared methods should include other hand-designed data augmentation methods, e.g., Mixup, CutMix, etc., and data-driven data augmentation methods, e.g., AutoAugment [1], Population Based Augmentation [2], Fast AutoAugment [3], etc. Meanwhile, the hand-designed data augmentation method SnapMix [4] can achieve a significant improvement on the CUB, Cars, and Aircraft datasets without introducing additional expensive computation.
- What happens if the classification model is trained entirely from scratch on the synthetic samples generated by the trained finder network and StyleGAN?
- Compared with existing data-driven data augmentation methods, the proposed method is limited in its transferability to other classification tasks.
[1] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. AutoAugment: Learning Augmentation Policies from Data. In CVPR, 2018.
[2] D. Ho, E. Liang, I. Stoica, P. Abbeel, and X. Chen. Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules. In ICML, 2019.
[3] S. Lim, I. Kim, T. Kim, C. Kim, and S. Kim. Fast AutoAugment. In NIPS, 2019.
[4] Huang S, Wang X, Tao D. Snapmix: Semantically proportional mixing for augmenting fine-grained data. In AAAI, 2021, 35(2): 1628-1636.
[5] Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune. Generative Teaching Networks. ICML 2020
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
- Eq.(9) is wrong, since the numerator does not contain $\epsilon$.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The studied problem of this paper is somewhat out-of-date. Its effectiveness is limited relative to existing research. Especially, the proposed method does not show effectiveness on problems with sparse data, e.g., medical imaging.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments on the various points of view.
### **W1: The data-driven data augmentation is not novel. What is the advantage of the proposed method over existing data augmentation methods?**
First of all, **the novelty of our work is mainly in solving the performance degradation of generative data augmentation (GDA), not in proposing a new data-driven data augmentation method**. As discussed in Sec. 4.7, data augmentation (DA) and generative data augmentation (GDA) are independent research fields. Thus, they can be combined easily. In this regard, the comparison with DA methods is an indicator of practicality. We have shown that our MGR achieves comparable performance to DA methods and the combination achieves the best performance in Table 3. MGR can outperform the DA baselines by switching the generator to StyleGAN-XL (Table 2), indicating that it continues to improve as generative models evolve in the future. Therefore, **the advantage is a performance improvement that cannot be obtained with DA**. Note that we selected AugMix, RandAugment, and TrivialAugment as the DA baselines because they are more lightweight and powerful baselines than data-driven DA methods such as AutoAugment [f].
[f] Müller, Samuel G., and Frank Hutter. "Trivialaugment: Tuning-free yet state-of-the-art data augmentation." CVPR. 2021.
---
### **W2: Difference between the proposed method and Generative Teaching Network (GTN) [g]**
Thank you for providing related work. **MGR differs from and is superior to GTN in terms of (I) meta-optimization objective, (II) classifier training, and (III) computation efficiency**. First, MGR meta-optimizes only the finder network, which searches for optimal samples for classifier training, whereas GTN meta-optimizes entire generators. As a result, MGR avoids the overfitting caused by updating entire generators, as reported in [h]. Second, MGR trains a classifier with both real and synthetic samples simultaneously, whereas GTN trains it with only synthetic samples. This is also the reason why GTN cannot serve as a GDA baseline. This difference comes from the difference in purpose between MGR and GTN: the former regularizes classifiers, while the latter meta-learns "data generation" for fast adaptation. For the regularization purpose, MGR is a more straightforward method than GTN, and GTN could not solve the performance degradation problem of GDA. Third, MGR efficiently computes the objective function by approximating the second-order gradient (Eq. (9)), whereas GTN naïvely computes meta-gradients. We will add this discussion to related work.
[g] Such, Felipe Petroski, et al. Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data. ICML. 2020.
[h] Tero Karras, et al. Training generative adversarial networks with limited data. NeurIPS. 2020
---
### **W3: The experimental results cannot support the effectiveness of the proposed method. The compared methods should include other hand-designed DA methods, e.g., Mixup, CutMix, SnapMix.**
We would respectfully point out that **we have shown the effectiveness in comparison with DA methods in Sec. 4.7 and Table 3**. Additionally, we provide the results with Mixup and CutMix in Table R-3, indicating that **our MGR outperformed Mixup, CutMix, and SnapMix**. Please see also the general response for more details. We will add the results to the paper.
---
### **W4: What if the classification model is trained entirely from scratch?**
Thank you for the comment. **MGR can stably perform even when training classifiers from scratch**. Table R-10 shows the same trend as Table 1. We will add the results to the paper.
**Table R-10. Classification on Cars (Scratch ResNet-18)**
||Top-1 Acc.|
|:-|:-|
|Base Model|64.29$\pm$.40|
|GDA|62.93$\pm$.82|
|MGR|70.62$\pm$.43|
---
### **W5: The proposed method is limited in transferability for other classification tasks**
**We have confirmed that MGR can stably perform on multiple classification datasets in Sec. 4.2 and Table 1 (a)**. Therefore, we consider that MGR has task-wide transferability. If we misunderstand something, we would be happy if you could provide additional comments.
---
### **Q1: Eq. (9) is wrong since the numerator does not contain $\epsilon$**
We appreciate your pointing this out. In L138, $\theta^{\pm}$ was incorrectly defined, which caused confusion. The correct definition is $\theta^{\pm} = \theta \pm \epsilon\eta\nabla_\theta\mathcal{L}_\mathrm{val}(\theta)$. With this modification, $\epsilon$ appears in the numerator, and Eq. (9) becomes the correct definition.
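For reference, with the corrected $\theta^{\pm}$ the approximation takes the standard DARTS-style central-difference form. This is a reconstruction from the discussion above (with $\phi$ denoting the finder parameters and $\eta$ the inner learning rate); the paper's Eq. (9) remains authoritative:

$$
\theta^{\pm} = \theta \pm \epsilon\,\eta\,\nabla_\theta \mathcal{L}_\mathrm{val}(\theta), \qquad
\nabla_\phi \mathcal{L}_\mathrm{val} \approx -\frac{\nabla_\phi \mathcal{L}_\mathrm{train}(\theta^{+},\phi) - \nabla_\phi \mathcal{L}_\mathrm{train}(\theta^{-},\phi)}{2\epsilon},
$$

so $\epsilon$ now enters the numerator through $\theta^{\pm}$ and the quotient remains well-defined as $\epsilon \to 0$.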
---
### **L1: The studied problem of this paper is somewhat out-of-date**
We respectfully disagree with this opinion. As evidenced by the SyntheticData4ML research workshop held at NeurIPS 2022 [i], machine learning with synthetic samples is an ongoing, active research topic, not out-of-date. We believe that our contribution provides new options for the use of synthetic samples and will significantly help develop this research trend.
[i] SyntheticData4ML Workshop. NeurIPS 2022.
---
### **L2: The proposed method does not show the effectiveness in medical imaging.**
With respect, this is not the case. We evaluated our method on the Chaoyang dataset [j], a medical imaging dataset for classifying cancers. Table R-11 shows the result. We confirm that **MGR performs effectively on a medical imaging dataset**.
**Table R-11. Classification on Chaoyang (ResNet-18, StyleGAN2-ADA)**
|| Top-1 Acc.|
|:- | :- |
|Base Model| 83.56$\pm$.57|
|GDA|84.23$\pm$.57|
|MGR|**87.48**$\pm$**.15**|
[j] Zhu, Chuang, et al. Hard Sample Aware Noise Robust Learning for Histopathology Image Classification. IEEE transactions on medical imaging.
---
Rebuttal Comment 1.1:
Title: The concerns are not perfectly addressed
Comment: Thanks for your detailed response; most of it has addressed my concerns. However, the major remaining concern is that the proposed method needs to employ both real and synthetic samples to train the classifier, whereas existing data augmentation methods are easy and lightweight ways to improve training. Meanwhile, the proposed method is expensive and cumbersome: compared with data-driven data augmentation methods, it can only be used for the task at hand and cannot be transferred to new tasks, and in the transfer stage, data-driven data augmentation methods are efficient and lightweight. Another concern is that the proposed method uses the data-augmentation consistency regularization trick, which is a strong improvement strategy; it is unfair that the compared methods do not use such a trick to further improve performance. Besides, for data-scarce tasks, e.g., few/zero-shot learning, generated images have obtained SOTA performance, e.g., [1].
[1] He R, Sun S, Yu X, et al. Is synthetic data from generative models ready for image recognition? In ICLR, 2023.
---
Reply to Comment 1.1.1:
Title: Response for the remaining concerns
Comment: Thank you for your timely response and additional explanations of your concerns.
> Thanks for your detailed response; most of it has addressed my concerns.
We are pleased that our response addressed most of your concerns.
> the major concern is that the proposed method needs to employ both real and synthetic samples to train the classifier.
This is one of the limitations not only of our method but of all GDA-like approaches, which utilize synthetic samples as additional data. **We believe that our method is worth the additional cost; it consistently improves the baseline models trained on only real samples**, as shown in all experiments of the paper and rebuttal (e.g., Table R-2). We would be very happy if you kindly find this positive aspect of our work.
> While the proposed method is expensive and cumbersome: compared with data-driven data augmentation methods, it can only be used for the task at hand and cannot be transferred to new tasks. In the transfer stage, data-driven data augmentation methods are efficient and lightweight.
Thank you for the additional explanations. As described in L115-119 of Sec. 3.2, the concept of MPS is to dynamically find useful samples for training classifiers that predict $p(y|x)$. This is intended to achieve task-specific generation and not to generalize across tasks. Even so, the question you implied (does the pre-trained finder transfer between tasks?) is interesting as it could be a hint to improve the efficiency of our method. To investigate this, we evaluated the transferability of a finder pre-trained on ImageNet. Table R-12 shows the result on Cars when we utilize the pre-trained finder on ImageNet without meta-optimization on the task. Surprisingly, the fixed ImageNet pre-trained finder improved PCR models. **Although the best performance was achieved by MGR (PCR + MPS), this indicates that pre-trained finders have the potential of transferability and computation efficiency**. This can be because the sampling strategy learned by the finder in ImageNet (including car images) is partially useful in Cars. In future work, we will try to skip the meta-optimization in this direction and reduce the computation costs. Thank you again for this constructive comment.
**Table R-12. Classification on Cars (ResNet-18)**
|| Top-1 Test Acc.|
|:-|:-|
|Base Model|85.80$\pm$.10|
|PCR|86.36$\pm$.08|
|PCR + Pre-trained Finder|86.97$\pm$.03|
|MGR|**87.22**$\pm$**.15**|
> Another concern is that the proposed method uses the data-augmentation consistency regularization trick, which is a strong improvement strategy. It is unfair that the compared methods do not use such a trick to further improve performance.
Thank you for the additional concern. Since consistency regularization (CR) on synthetic samples (i.e., PCR) is a part of our proposed method, the comparison with other methods that do not originally use such CR techniques is fair. Nevertheless, confirming the effect of CR on real samples is important for seeing the difference between synthetic and real samples. Table R-13 shows the results. **Our method was superior to CR on real samples, which means that the meta-optimized synthetic samples are preferable for regularizing the classifiers**. We will add these results to the paper.
**Table R-13. Classification on Cars (ResNet-18)**
|| Top-1 Test Acc.|
|:-|:-|
|Base Model| 85.80$\pm$.10|
|CR on real data|86.16$\pm$.02|
|SnapMix|87.11$\pm$.20|
|SnapMix + CR on real data|88.16$\pm$.03|
|**MGR (StyleGAN-XL)**|**88.37**$\pm$**.20**|
|**SnapMix + MGR (StyleGAN-XL)**|**90.15**$\pm$**.07**|
> Besides, for data-scarce tasks, e.g., few/zero-shot learning, generated images have obtained SOTA performance, e.g., [1] etc.
The method of [1] utilizes pre-trained text-image generative models (e.g., Stable Diffusion), indicating it is a transfer learning method using synthetic samples. Recent works [k, l] also show transfer learning methods utilizing both real and synthetic samples from Stable Diffusion in the GDA fashion. In contrast to these methods, our method is a regularization for data-scarce tasks that depends on neither source datasets nor pre-trained generative models. We would respectfully point out that our work addresses a different research question from these works: solving the performance degradation problem of generative data augmentation. Thus, our contribution is fundamental for leveraging synthetic samples and can help enhance transfer learning methods in future work.
[k] Dunlap, Lisa, et al. "Diversify your vision datasets with automatic diffusion-based augmentation." arXiv preprint arXiv:2305.16289 (2023).
[l] Burg, Max F., et al. "A data augmentation perspective on diffusion models and retrieval." arXiv preprint arXiv:2304.10253 (2023). | Summary: The paper proposes meta generative regularization (MGR) for improving generative data augmentation. MGR is optimized by alternating training between the main network and the finder network. To train the main network, contrastive learning is used. To train the finder network, the authors formulate a bilevel optimization problem and approximate its solution.
Strengths: 1. The proposed method does not update the generative model, but rather updates the finder network. Thus, the foundation generative model can be used.
2. I think that the proposed method does not depend on the type of generative models such as normalizing flow, auto-regressive, gan, and score-based generative models.
3. FDM is used to reduce the computational complexity.
4. The method remains effective on reduced datasets.
Weaknesses: 1. CutMix [A] and MixUp [B], rather than AugMix, are the stronger baselines. These methods are simple, easy to implement, training-free, and effective. A comparison is needed.
2. In Sec. 4.4, I am not sure about the interpretation of the UMAP visualization. From the visualization, the diversity of the proposed method is reduced. Thus, the contrastive loss term, L_PCR, might be important. How much is performance improved when the main model is trained with cross-entropy and contrastive loss using only real data?
3. The result of diffusion model is in the supplementary. However, the experiment of recent generation models, including text-to-image generator, makes the paper more convincing because it shows promising results [C, D, E, F].
4. How does this method affect evaluation in terms of robustness, generalizability, or bias? There is a possibility of generating biased samples.
[A] CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features, ICCV 2019
[B] mixup: Beyond Empirical Risk Minimization, ICLR 2018
[C] Is Synthetic Data from Generative Models Ready for Image Recognition? ICLR 2023
[D] Fake it till you make it: Learning transferable representations from synthetic ImageNet clones, 2023
[E] Synthetic Data from Diffusion Models Improves ImageNet Classification, 2023
[F] Generative models improve fairness of medical classifiers under distribution shifts, 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses.
===
I update my rating from 4 to 5 because my concerns have been addressed and the authors will reflect the discussion below in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is described in Sec. 6 in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive comments and suggestions.
### **W1: Comparing MGR with CutMix and MixUp**
Thank you for this suggestion. We provide the comparison in Table R-3. **MGR outperformed CutMix and Mixup**. As well as the cases of other DA methods, **the combination of MGR and CutMix/Mixup produces further improvements**. We will add these results in Table 3 of the paper. We would appreciate it if you take a look at the general responses for more details.
---
### **W2: Interpretation of UMAP visualization and the performance of consistency regularization loss with real data**
> In Sec. 4.4, I am not sure about the interpretation of UMAP. From the visualization, the diversity of the proposed method is reduced.
Thank you for the comments. For a more straightforward interpretation, we additionally tried the UMAP visualization on a simple binary classification task (Fig. I in the attached PDF). MGR clearly separates the clusters for each class and makes each cluster dense, indicating that **MGR reduces the distance between samples within the same class rather than reducing the diversity of the entire feature space**.
> the loss term of contrastive learning, L_PCR, might be important. How much the performance is improved when the main model is trained on cross entropy and contrastive loss using only real data?
We agree with your comment: $\mathcal{L}_\text{PCR}$ plays an important role in regularizing classifiers. We compared our methods and the consistency regularization with real data (CR on real data) in Table R-2. **Our PCR and MGR outperformed CR on real data**. This implies that synthetic samples help train classifiers with useful features that are not in the real data. Further, we would emphasize that MPS also contributes to performance improvements by meta-learning, which could not be achieved by CR with real samples. Please also see the general response for more details.
---
### **W3: On leveraging pre-trained text-to-image generator**
> the experiment of recent generation models, including text-to-image generator, makes the paper more convincing because it shows promising results [C, D, E, F].
Thank you for the comments. Indeed, the studies of training classifiers with text-to-image generators are increasing and have become popular in the research community. However, we would respectfully point out that our work has a different research question from these works: solving the performance degradation problem of generative data augmentation. The text-to-image generators such as Stable Diffusion used in [C] are pre-trained on large-scale multi-modal datasets. This means that using these models contains the effect of transfer learning; transfer learning across datasets is not included in our claims. Thus, to separate the effects of our method and transfer learning, we focused on the generative models trained on target data only. Since our results show that the synthetic samples are useful even when the target dataset is only available, our contribution is fundamental for leveraging synthetic samples. In future work, we will develop sampling methods taking transfer learning and text-prompting into account. We will add the above discussions in the Limitation section.
On a related note, the performance study when using fine-tuned generators is discussed in the response for Reviewer g8gt (Q1). Please take a look if you are interested.
---
### **W4: How does MGR affect robustness, generalizability, or bias? Is there a possibility of generating biased samples?**
Thank you for your interesting comments. In short, **our method can improve robustness against natural corruptions, and MPS does not generate samples biased toward the training distribution.** We tested the robustness of our method on CIFAR-10-C [e], a test set of CIFAR-10 corrupted by various transformations. Table R-9 shows the results. While GDA degraded the performance under all corruptions, PCR significantly improved on the base model. Furthermore, MGR achieved even higher robustness. This indicates that MPS in MGR can provide samples useful for generalization through meta-optimization. We will add this result to the paper.
**Table R-9. Classification on CIFAR-10-C (severity: 5)**
| | clean | gaussian noise | shot noise | impulse noise | defocus blur | glass blur | motion blur | zoom blur | snow | frost | fog | brightness | contrast | elastic transform | pixelate | jpeg compression | mean |
| :--------- | :---- | :------------- | :--------- | :------------ | :----------- | :--------- | :---------- | :-------- | :---- | :---- | :---- | :--------- | :------- | :---------------- | :------- | :--------------- | :---- |
| Base Model | 86.49 | 53.32 | 53.12 | 39.01 | 34.06 | 37.15 | 31.08 | 36.14 | 46.68 | 36.38 | 19.14 | 52.45 | 10.51 | 41.14 | 46.59 | 53.02 | 42.27 |
| GDA | 84.11 | 52.70 | 52.98 | 39.94 | 29.39 | 38.66 | 28.97 | 28.82 | 45.04 | 40.08 | 24.00 | 50.29 | 13.07 | 40.91 | 45.11 | 52.85 | 41.68 |
| PCR | 87.06 | 59.35 | 60.65 | 37.71 | 47.51 | 50.89 | 41.71 | 47.11 | 63.94 | 57.33 | 31.13 | **73.43** | 13.66 | 58.00 | 63.77 | 68.55 | 53.86 |
| **MGR** | **88.02** | **61.69** | **62.53** | **41.62** | **52.21** | **51.91** | **47.94** | **51.09** | **65.78** | **59.50** | **34.49** | 73.23 | **14.83** | **60.65** | **65.32** | **71.10** | **56.37** |
[e] Hendrycks, Dan, and Thomas Dietterich. "Benchmarking neural network robustness to common corruptions and perturbations." International Conference on Learning Representations (2019).
---
Rebuttal Comment 1.1:
Title: Reviewer gssr
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | Summary: The authors propose a method for using synthetic images from GANs to augment training image classifiers. The naive approach to this problem is to generate samples for each class and treat these as supervised examples, but this can degrade performance due to image artifacts. Instead, the authors propose to use the generated data only for consistency regularization of the featurizer (”PCR”). Further, they meta-learn the codes to use for generation (”MPS”). On 6 datasets, the authors see performance improvements from using synthetic data this way, rather than performance drops.
Strengths: - The motivation is clear, the method is clean, and the approach makes intuitive sense. The overall paper presentation is very good.
- The authors select a nice set of baselines and show their method outperforms them. In particular, GDA + MH was an important comparison to have.
- The paper contains a solid set of ablations, including: the importance of PCR v. MPS (either can improve performance, both are best), the effect of MPS on sample quality as measured by FID, experiments with multiple generative models for one dataset, and experiments showing that gains are additive with standard data augmentations.
Weaknesses: - A missing baseline is consistency regularization without generative data augmentation, i.e. equation 5 applied to real examples $\mathcal D$ rather than $\mathcal D_p$. How does the method compare to training with this objective, and are gains additive?
- Standard training with TrivialAugment / RandAugment outperform this method on Cars, presumably with much lower computational cost — though the authors do show these can be combined with the method to obtain further gains. I’d like to check if this trend holds for other datasets as well.
- Because of the metalearning loop, the method is too slow to apply with diffusion models, as noted in the Limitations and Appendix B.3. All experiments are done with GANs trained from scratch on the target datasets.
- Experiments are done on relatively small datasets. It would be interesting to compare this method on ImageNet.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: A point of confusion for me is why experiments avoided using pretrained generative models / finetuning from them. As noted in the related work, massive pretraining may help produce higher quality samples. The authors attribute degradations after training on generated data to *class leakage*, i.e. samples where artifacts of multiple classes are combined into the same image, such as the tennis ball with a dog face generated by BigGAN. This seems like the kind of artifact that improved generative models can remove. Does generative data augmentation with large pretrained models result in such performance drops? Have the authors tried their method with finetuned GANs?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors note that the meta-learning loop renders the method unusable for computationally intensive generative models, e.g. diffusion models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your careful reading and thoughtful feedback.
### **W1: How does the proposed method compare to the method using consistency regularization on real samples?**
Thank you for your insightful suggestion. Although the consistency regularization with real samples brings some improvement, **the proposed method using synthetic samples and meta-learning outperforms this baseline**. We would emphasize that our method has the advantages of performance improvements by meta-learning of the sampling networks and switching the generative model to a better one, which could not be achieved by CR with real samples. For details, we would be happy if you take a look at the general response and Table R-2.
---
### **W2: What if comparing our method with DA on other datasets?**
Thank you for the reasonable comment. Tables R-6 and R-7 show results on Aircraft and Birds; DA and our MGR were competitive, and their combination achieved further accuracy improvements. Thus, **our method provides practical performance regardless of the dataset**. MGR alone outperformed the DA methods on some datasets and consistently achieved the best accuracy when combined with DA. We will add these results to the paper.
**Table R-6. Classification on Aircraft (ResNet-18)**
| |No TDA|AugMix|RandAug|Trivial Aug|
| :- | :- | :- | :- | :- |
|Base Model|62.61$\pm$.79|64.53$\pm$.70|63.12$\pm$.52|66.14$\pm$.24|
|MGR|65.11$\pm$.57|65.65$\pm$.12|64.98$\pm$.09|68.17$\pm$.34|
**Table R-7. Classification on Birds (ResNet-18)**
|| No TDA | AugMix | RandAug | Trivial Aug |
|:-|:-|:-|:-|:-|
|Base Model|72.24$\pm$.32|72.44$\pm$.11|70.80$\pm$.37|73.61$\pm$.19|
|MGR|74.24$\pm$.34|74.92$\pm$.62|71.14$\pm$.23|75.02$\pm$.29|
---
### **W3: Limitations of the computational cost of meta-learning**
We thank you for carefully reading the Limitations section and Appendix B.3. As we mentioned in the Limitations, diffusion models are rapidly being sped up by recent intensive efforts. We can easily imagine diffusion models achieving a sampling speed comparable to GANs in the near future, because the speed-up issue is also important in other applications, such as real-time rendering. For example, a recent work [d] proposed a diffusion-like generative model called the consistency model, which achieves high-quality samples (6.2 FID on ImageNet) in a few steps. From this perspective, the limitation of training speed with diffusion models should be resolved in the near future.
[d] Song, Yang, et al. "Consistency Models." International conference on machine learning (2023).
---
### **W4: It would be interesting to compare this method on ImageNet**
Thank you for the suggestion. We provide the ImageNet results in Table R-1 in the general response. We confirm that **our method stably performs even on the large-scale dataset**. We would be happy if you could see the general response for more details.
---
### **Q1: Does naive generative data augmentation with large pretrained models result in such performance drops? Have the authors tried their method with finetuned GANs?**
**Yes and Yes. We have tried to use ImageNet pre-trained BigGAN and confirmed that this does not solve the degradation problem of GDA (Table R-8). However, the use of pre-trained models can enhance our method.** We omitted this result from the paper to separate the proposed method's effects from the transfer learning effects by the pre-trained models. Even so, since it is an important fact that fine-tuning does not solve the problem, we will add this result to Appendix. Thank you for pointing out this.
**Table R-8. Classification on Cars (ResNet-18, ImageNet pre-trained BigGAN)**
|| 10% | 25% | 50% | 100% |
|:-|:-|:-|:-|:-|
|Base Model|20.11$\pm$.03|49.33$\pm$.54|72.91$\pm$.38|85.80$\pm$.18|
|GDA|18.82$\pm$.22|46.38$\pm$.59|70.23$\pm$.66|86.11$\pm$.16|
|MGR|24.55$\pm$.24|53.41$\pm$.89|75.46$\pm$.12|87.17$\pm$.18|
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. I believe my concerns have been thoroughly addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our rebuttal! We are glad to hear that the response addressed your concerns. Thank you again for your constructive and thoughtful feedback. | Summary: This paper leverages synthetic images from generative models to train classifier models, effectively using them as an augmentation tool. However, instead of incorporating these images in a simplistic fashion, the synthetic images are utilized as a regularizer. The paper posits that synthetic samples may not always perfectly mirror the class categories found in the real data distribution. As a response, a meta-learning framework is employed to dynamically ascertain which synthetic samples should be used to minimize validation losses. The paper introduces a feature-based consistency regularization loss for the generated images, a method similar to self-supervised techniques. The unique contribution of this paper is the proposition of a finder network trained within a meta-learning context; this network is designed to select images that will enhance the performance of the classification model. Through extensive experimentation, the paper illustrates that the proposed methodology can effectively circumvent the performance deterioration often linked with naive generative data augmentation while concurrently improving the baselines.
Strengths: * The paper is well-presented and easy to follow.
* The paper presents a novel sampling network that is trained in a meta-learning setting to select synthetic images that will enhance the performance of the classification model.
* Through a series of experiments, the paper successfully illustrates the effectiveness of the overall approach, as well as the contributions of different aspects of the methodology, i.e., consistency regularization loss and Meta Pseudo Sampling.
* The proposed generative model augmentation method complements conventional data augmentation methods and can be employed together to enhance performance further.
Weaknesses: * The paper showcases its experiments predominantly on simple, fine-grained classification datasets. To bolster the credibility of their methodology, the authors would benefit from demonstrating how their approach enhances classification performance on more complex, large-scale datasets, such as ImageNet.
* In Table 1, I suggest including an additional baseline: the base model + SSL. This means applying a consistency regularization loss on the real training images without utilizing any synthetic images. This will highlight the degree of improvement achieved using the generative model and the proposed model. The author acknowledges that training the sampling network in a meta-learning setting is computationally expensive, so this comparison would clarify whether the benefits of this approach justify its cost, particularly if the gains are minimal.
* For the consistency loss, the paper solely employs augmentation in the image space. However, one might ask why not also utilize augmentation in the generative model's latent space by introducing small noise to the latent vector z. Given that the latent space of the generators implicitly represents the image manifold, performing augmentation in this latent space could lead to meaningful augmentation in the image space. Such augmentation could potentially be more challenging to achieve using conventional image augmentation methods.
* Figure 5 could be improved further by incorporating a visualization of different class features along the decision boundary of the classifier. This enhancement would provide further insight into how these synthetic images influence the decision boundary under different cases.
* (Just a suggestion) In lines 45-50, I would like to introduce an additional argument. Despite generative models being trained to model p(x), they frequently fail to capture the entire distribution of the training set, focusing mainly on the high-density region of the distribution. Therefore, the information extracted from these generative models is often less comprehensive than the information found in the actual dataset distribution. Furthermore, these synthetic images frequently exhibit disfigured object shapes; for instance, a majority of human shapes in StyleGAN-XL images appear distorted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: *The major novelty of the paper resides in the proposition of a sampling network, trained via a meta-learning method, to generate synthetic samples aimed at enhancing the classification model's performance. The authors further highlight that a superior FID score from the generative model correlates to improved performance in classification models (Table 2), and the FID score derived from their sampling network significantly surpasses that of uniform sampling (Fig 6b). In the generative models' literature, there exist works that advocate for effective sampling strategies from pre-trained GAN models, utilizing the multimodal truncation method, such as Makady et al. 2022's 'Self-Distilled StyleGAN: Towards Generation from Internet Photos.' How would their sampling approach fare when combined with the consistency regularization loss? It could significantly bolster the paper's case if authors demonstrated that the proposed sampling network exceeds the performance of other sampling methods, like the multimodal truncation method.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The author themselves pointed out that the limitation of the method is the requirement of bilevel optimization of classifier and finder networks. This optimization is computationally expensive and may be less practical when used in generative models that require multiple inference steps, such as diffusion models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful reading and many insightful comments.
### **W1: Is the proposed method effective on more complex, large-scale datasets such as ImageNet?**
**Yes.** By following your suggestion, we evaluated our methods on ImageNet. We confirmed the same trend as Table 1 even on ImageNet, which is a large-scale and complex dataset. Please refer to the general response and Table R-1.
---
### **W2: Additional baseline of consistency regularization loss with real data**
Thank you for this suggestion. We examined this baseline in Table R-2 in the above general response and found that **consistency regularization (CR) with synthetic samples and meta-learning is more effective than CR with real samples**. We would emphasize that our method has the advantages of performance improvements from meta-learning the sampling network and from switching the generative model to a better one, neither of which can be achieved by CR with real samples. From these facts, we conclude that the proposed approach is worth the cost.
---
### **W3: Why not also utilize augmentation in the generative model's latent space?**
This is a very interesting idea. We implemented this approach by adding Gaussian noise to the latent vector as $z^\prime = z + s$ where $s\sim\mathcal{N}(0,10^{-3})$. Then, we compute the consistency regularization between $g(G(z))$ and $g(G(z^\prime))$, instead of between $g(G(z))$ and $g(T(G(z)))$. We call this variant Latent Augmentation. We found that Latent Augmentation improves the performance of our MGR (Table R-4). Interestingly, Latent Augmentation also improves performance even when used alone, without image data augmentation $T$ (RandAugment). This indicates that we can obtain meaningful variants by perturbing latent vectors, which is challenging for conventional data augmentation, as you said. However, this approach doubles the number of the generator's forward computations and thus increases computation time and memory footprint. We would like to present the latent augmentation technique in the paper as a promising option for improving MGR.
**Table R-4. Classification on Cars (ResNet-18)**
|| Top-1 Acc.|
| :- | :- |
|Base Model| 85.80 $\pm$ .10 |
|MGR (Latent Augment)| 86.49 $\pm$ .33 |
|MGR (RandAugment)| 87.22 $\pm$ .15|
|MGR (RandAugment + Latent Augment)| 87.85 $\pm$ .53 |
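A minimal sketch of the Latent Augmentation variant described above (toy linear maps stand in for the generator $G$ and feature extractor $g$; these placeholders are our assumptions for illustration, not the models used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the generator G and feature extractor g
# (hypothetical linear maps; the paper uses a GAN and a CNN).
W_G = rng.normal(size=(64, 16))   # "generator": latent (16-d) -> image (64-d)
W_g = rng.normal(size=(8, 64))    # "feature extractor": image -> feature (8-d)

def G(z):
    return W_G @ z

def g(x):
    return W_g @ x

def latent_augment_cr_loss(z, sigma=1e-3):
    """Consistency regularization between g(G(z)) and g(G(z + s)),
    where s ~ N(0, sigma) perturbs the latent vector (Latent Augmentation)."""
    s = rng.normal(scale=sigma, size=z.shape)
    f_orig = g(G(z))
    f_pert = g(G(z + s))
    return np.mean((f_orig - f_pert) ** 2)

z = rng.normal(size=16)
loss = latent_augment_cr_loss(z)
```

The loss vanishes exactly when the perturbation is removed, which makes the consistency interpretation explicit.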
---
### **W4: UMAP visualization should be improved (Fig. 5)**
Thank you for the helpful comments. According to your suggestion for a more straightforward visualization of the decision boundary, we conducted the additional visualization study by using a binary classification dataset created by modifying Pets. In Fig. I of the attached PDF, we can see how the proposed method effectively enlarges the margins of the class boundaries. Please also see the general response. We would be happy to receive any comments on the modified visualization results.
---
### **W5: Suggestion of additional explanation for the second hypothesis (L45-50)**
> Despite generative models being trained to model p(x), they frequently fail to capture the entire distribution of the training set, focusing mainly on the high-density region of the distribution. Therefore, the information extracted from these generative models is often less comprehensive than the information found in the actual dataset distribution. Furthermore, these synthetic images frequently exhibit disfigured object shapes; for instance, a majority of human shapes in StyleGAN-XL images appear distorted.
We appreciate your detailed suggestion with the concrete instance. It makes sense that naïve synthetic samples lack detailed information due to focusing on the high-density regions. In fact, our sample visualization in Fig. 7 suggests that uniform sampling produces samples with similar features rather than comprehensive coverage. The proposed method seems to reduce the negative effect of such disfigured samples via PCR and to remove them via MPS. We would gladly incorporate your suggestions into the paper!
---
### **Q1: How would multi-modal truncation sampling [c] fare when combined with PCR?**
Thank you for the question. Through the additional experiments, we found that multi-modal truncation sampling is somewhat helpful for improving PCR, but MPS is more effective. Table R-5 shows the result of combining multi-modal truncation sampling and PCR. **Although the FID score of multi-modal truncation sampling certainly outperforms MPS, the gain of the classification accuracy underperforms MPS**. This implies that improving sample quality is a necessary condition for performance improvements, not a sufficient condition. Meanwhile, since MPS explicitly searches for samples that minimize the validation loss, it can improve classifiers more directly than incorporating existing sampling methods. We will add this analysis to the paper.
**Table R-5. Classification on Cars (ResNet-18)**
|| Running Mean FID | Top-1 Acc.|
| :- | :- | :- |
|PCR|22.96| 86.36$\pm$.08|
|Multi-modal truncation sampling + PCR|21.12|86.51$\pm$.21|
|MGR (MPS + PCR)|22.08|87.22$\pm$.15|
[c] Mokady, Ron, et al. "Self-distilled stylegan: Towards generation from internet photos." ACM SIGGRAPH 2022 Conference Proceedings. 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed rebuttal
Comment: My apologies for not responding earlier. Thank you very much for taking the time and answering my questions. I have also gone through the other reviewers' comments and I am now willing to change my score and recommend acceptance of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our rebuttal and updating the score! We sincerely appreciate again your constructive and insightful suggestions, such as additional baselines and explanations of our hypothesis. | Rebuttal 1:
Rebuttal: # General Response
We greatly appreciate the reviewers for providing many constructive and insightful comments. We are happy to find all reviewers give scores of 3 (good) or better for the soundness and presentation of our paper. We are also pleased that most reviewers recognize the effectiveness of our method in the Strength section. The feedback from the reviewers concentrates on the suggestions to add experiments to enhance our claims. Thus, we have conducted as many additional experiments as possible to address the concerns. The evaluations on new datasets (e.g., **ImageNet** and **medical imaging**) and baselines (e.g., **CutMix** and **Mixup**) further clarify the performance superiority of our method besides our original claims. For details, please refer to the individual responses.
In the rest of this response, we provide the parts of additional experiments for answering the shared concerns among the reviewers.
## **Additional experimental results for shared concerns**
---
### **Is the proposed method effective on ImageNet? (Reviewers ZFum and g8gt)**
**Yes, our method (MGR) improves top-1 accuracy on ImageNet.** We evaluated our meta generative regularization (MGR) on ImageNet with a randomly initialized ResNet-18 as the classifier and a pre-trained BigGAN as the generator. The experimental setup follows [a]. Table R-1 shows that MGR successfully improves top-1 accuracy, while naive generative data augmentation (GDA) does not; this is the same trend as Table 1 (a) of the main paper. This result indicates that our method consistently works on complex and large-scale datasets. We will add the result over multiple trials with standard deviations to the paper.
**Table R-1. ImageNet Classification (ResNet-18)**
||Top-1 Acc. (ImageNet Val.)|
| :-- | :--|
|Base Model |68.10|
|GDA|64.74|
|MGR (Ours)|**70.55**|
[a] Pytorch - Models and Pre-trained Weights
---
### **What if using real data instead of synthetic samples in consistency regularization? (Reviewers ZFum and g8gt)**
Reviewers **ZFum** and **g8gt** recommended assessing the models' performance with consistency regularization (CR) computed on *real* datasets. We agree that this baseline is important to evaluate the value of synthetic samples in the regularization. Table R-2 shows the performance of consistency regularization on real data (which we refer to as CR on real data) on the Cars dataset; the setting is shared with Table 1 of the main paper. For CR on real data, we used the training dataset to compute Eq. (5). **Our pseudo consistency regularization (PCR) slightly outperformed CR on real data**. This may be because interpolation by the generator helps the feature extractor capture the differences between images. **Furthermore, PCR enables MGR to search for optimal synthetic samples through the latent vectors of the generative model**. In contrast, CR on real data cannot search for optimal real samples for the CR loss because real data is fixed in the data space. We will add the CR-on-real-data results to Table 1 and the above discussion to the paper.
**Table R-2. Classification on Cars (ResNet-18)**
|| Top-1 Test Acc.|
| :--------- | :-------------- |
| Base Model | 85.80 $\pm$ .10 |
| CR on real data | 86.16 $\pm$ .02 |
| PCR | 86.36 $\pm$ .08 |
| MGR | 87.22 $\pm$ .15 |
| MGR (StyleGAN-XL)| **88.37** $\pm$ **.20**|
---
### **Can the proposed method outperform CutMix, MixUp, and SnapMix? (Reviewers gssr and Ja7o)**
**Yes.** We evaluated CutMix, MixUp, and SnapMix on Cars with ResNet-18 (Table R-3). Our MGR outperformed all of them and, additionally, boosted their performance when combined with them. This trend is consistent with Table 3. The result also indicates that synthetic samples, which non-linearly interpolate real samples, elicit better performance than CutMix, Mixup, and SnapMix, which linearly interpolate real samples in the input space. We consider this strength an advantage of using generative models.
**Table R-3. Classification on Cars (ResNet-18)**
||Top-1 Acc.|
|:-|:-|
|Base Model|85.50$\pm$.10|
|CutMix|86.13$\pm$.19|
|Mixup|86.87$\pm$.30|
|SnapMix|87.11$\pm$.20|
|**MGR**|87.22$\pm$.15|
|**MGR+CutMix**|87.80$\pm$.51|
|**MGR+Mixup**|87.60$\pm$.46|
|**MGR+SnapMix**|88.21$\pm$.13|
---
### **Additional UMAP visualization on binary classification dataset (Reviewers ZFum and gssr)**
Reviewers **ZFum** and **gssr** commented on the modification and interpretation of the UMAP visualization in Fig. 5. For a more straightforward visualization, we designed a simple binary classification task that separates Pets [b] into dogs and cats, and visualized the feature space of a model trained on this task. Fig. I in the attached PDF shows the feature space after one epoch of training. While GDA failed to separate the clusters for each class, **MGR clearly separates the clusters**. Looking more closely, MGR helps samples to be dense for each class. This is because PCR makes the feature extractor learn slight differences between the synthetic samples that interpolate the real samples.
[b] Parkhi, Omkar M., et al. "Cats and dogs." 2012 IEEE conference on computer vision and pattern recognition. IEEE (2012).
---
Pdf: /pdf/1c1dae32266c3d87542c91da81e58979bb26a814.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Joint Training of Deep Ensembles Fails Due to Learner Collusion | Accept (poster) | Summary: The authors aim to answer why joint training of ensembles does not work as well as separately training the members of the ensembles, before ensembling them. This is a well-known empirical fact, but the authors attempt to give a theoretical understanding of it, which is novel to my knowledge.
To reach their objective, they consider an expression for diversity (DIV) for probability averaging, which they motivate through a (known) regression example. Then they generalize that expression to any twice-differentiable loss function.
The authors study DIV in more depth by (upper) bounding it, then using it as a tunable regularization tool to interpolate between joint training and separate training of ensembles.
Then the authors hypothesize that jointly training the ensemble may artificially bump up DIV without yielding proper generalization. They say that this happens because of "learner collusion" a phenomenon where two learners are displaced artificially by a constant bias term without changing the overall performance of the ensemble. They devise empirical measures to test that hypothesis (by studying generalization gaps, diversity explosion, learner codependence etc.)
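The collusion mechanism described above can be illustrated with a minimal numeric sketch (all numbers are invented for illustration): displacing two averaged learners by $\pm c$ leaves the ensemble output untouched while inflating a variance-style diversity measure.

```python
import numpy as np

# Two base learners' predictions on a batch of 5 examples (toy numbers).
f1 = np.array([0.2, 0.5, 0.9, 0.4, 0.7])
f2 = np.array([0.3, 0.6, 0.8, 0.5, 0.6])

def ensemble(preds):
    # ensemble prediction = average over learners (axis 0)
    return np.mean(preds, axis=0)

def diversity(preds):
    # spread of learner predictions around the ensemble mean
    return np.mean(np.var(preds, axis=0))

c = 10.0  # colluding offset: +c on one learner, -c on the other
honest = np.stack([f1, f2])
colluding = np.stack([f1 + c, f2 - c])

# The ensemble prediction is identical under collusion...
assert np.allclose(ensemble(honest), ensemble(colluding))
# ...but the measured diversity has exploded.
```

Because the $\pm c$ terms cancel in the average, gradient descent on a joint objective can drift into such configurations without any penalty on ensemble loss.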
Strengths: S1: The paper is really well written. Thanks to the authors for making the reading experience so smooth.
S2: The explanation of the phenomenon (to my knowledge) is novel, and gives me a new way of thinking about the well-known failure case of jointly training ensembles.
S3: It is great that the authors have theory that is paralleled with experiments on real-world data.
Weaknesses: W1: I am not sure I understand what happens when we vary $\beta$. I think the paper could improve with a more detailed discussion of the effect of collusion at intermediate values of $\beta$. See my Q2 in the questions below.
W2: Typically, we expect to be able to devise better models after we have a better understanding of an understudied phenomenon. The paper lacks a discussion about this. It's fine to leave it as future work, but it would be useful to discuss some directions a bit.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Q1: In Figure 2, why does Facebook seem to be different from the overall trend?
Q2: If we get $\beta \approx 1$, then great, we have increased diversity and this gives us some hints as to why joint training might fail. However, as we vary $\beta$, I do not gain any intuition. Is the collusion effect a monotonic one? It seems so from Figure 6. If there is some $\beta > 0$ then do we always have that effect? Shouldn't intuition tell us that there should be a "sweet spot" for $\beta$? If not, then why would we be interested in training learners jointly to any extent? For example, could we do something more explicit to prevent the learner's collusion?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for their constructive feedback. We have addressed the points raised below, which have helped us further improve our submission's clarity and contribution.
$~$
* _Further discussion on the effects of varying $\beta$_ - The non-linear relationship between $\beta$ and the mean squared error is indeed a notable phenomenon that we only addressed in the appendix. Let us provide a more succinct discussion on this point.
$~$
Figures 2-4 do highlight a sudden jump in learner collusion as $\beta \to 1$ in the case of regression rather than a gradual increase. Why does this occur? A useful explanation lies in the gradient-adjusted target (GAT) analysis in App D. To briefly summarize, this section considers applying individual training but with an adjusted target which is a function of the gradient of the ensemble error. Specifically, we adjust the targets to account for the errors of the ensemble. Formally, we optimize $\frac{1}{M}\sum_{j=1}^M (f_j - \bar{y})^2$ where $\bar{y}$ is the adjusted target given by $\bar{y} = y - \alpha \cdot g$. Here $g$ is the ensemble loss gradient and $\alpha$ is a step size parameter determining how big a step the target of an individual learner should take in order to account for the errors of the ensemble. In Thm. D.1 we show that this GAT objective is exactly equivalent to our augmented objective in Sec. 5 with $\beta = (1 - \frac{1}{(1 + \alpha)^2})$. Given this relationship, we might notice that the gradient step $\alpha$ grows exponentially as $\beta \to 1$ resulting in each individual learner having their targets biased excessively to account for the ensemble errors (**please see the new figure in the attached PDF visualizing this relationship**). Given this perspective, it is unsurprising that learner collusion does not grow linearly with $\beta$.
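The mapping between the GAT step size $\alpha$ and the interpolation weight $\beta$ stated above can be checked numerically (a quick sketch; the function names are ours):

```python
import numpy as np

def beta_from_alpha(alpha):
    """Interpolation weight implied by a GAT step size alpha (Thm. D.1):
    beta = 1 - 1 / (1 + alpha)^2."""
    return 1.0 - 1.0 / (1.0 + alpha) ** 2

def alpha_from_beta(beta):
    """Inverse map: the gradient step needed to realize a given beta
    (valid for 0 <= beta < 1)."""
    return 1.0 / np.sqrt(1.0 - beta) - 1.0

# alpha = 0 recovers purely individual training (beta = 0),
# while alpha grows without bound as beta -> 1 (pure joint training):
steps = [alpha_from_beta(b) for b in (0.0, 0.5, 0.9, 0.99, 0.9999)]
```

The last few entries of `steps` make the exponential blow-up of the target adjustment near $\beta = 1$ concrete, matching the sudden jump in collusion observed in Figures 2-4.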
$~$
The reviewer's question regarding a "sweet spot" of $\beta$ for optimizing diversity is an interesting one. Considering that joint training only occurs at exactly $\beta = 1$, we had the same intuition that low values of $\beta$ may still be beneficial. Intermediate values of $\beta$ sometimes appear to achieve the best performance (see e.g. ResNet-18 in Fig. 6 left and CNN in Fig. 14) but often this is not the case (e.g. Fig. 15). We hope that, building upon our findings, future work will provide methods for optimizing for higher levels of diversity whilst avoiding learner collusion, thereby enabling the study of increasing diversity _without_ this obscuring degeneracy.
$~$
**Action taken**: We have added a succinct summary of this discussion to the main text. We have also added this new plot and refined the discussion in App D.
$~$
* _Discussion in improved methods based on our findings_ - We agree that our work could benefit from some additional discussion on future methodological work building upon ours. This is something we are happy to expand upon in our conclusion. Reviewer pZUZ suggested randomly dropping a subset of base learners during training. The hypothesis being that if we drop a sufficiently large portion of base learners in, say, each batch but still perform joint training on the remainder, this should cause inflated diversity to be actively harmful even on the training data and therefore force the ensemble to avoid this degeneracy. We decided to try out this idea which we describe next.
$~$
We repeat the setup on CIFAR-10 with ResNet-18 architecture as described in the paper. We train each model using the joint objective, but at each batch we drop a proportion $p \in [0, 0.2, 0.4, 0.6]$ of randomly selected learners from the ensemble. We then investigate if this (a) reduces collusion and (b) improves ensemble performance. **The results of this experiment are included in the attached pdf**. We find that dropping learners does indeed significantly reduce the diversity score indicating a reduction in learner collusion (although a reasonably large proportion of learners is required to be dropped). Unfortunately, the improvement in performance by reducing collusion is negated by a decrease in individual performance due to the base learners becoming weaker on average. The reason why is apparent once we notice that by dropping a base learner from some proportion of batches, this is exactly equivalent to bootstrapping. While bootstrapping has been effective for ensembles of weak learners (e.g. random forest) it has already been shown to be harmful for deep ensembles [1]. Unfortunately, despite being a sensible idea at the outset, this resolution can only reduce the effect of learner collusion by simultaneously harming the performance of the ensemble due to bootstrapping.
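A minimal sketch of the per-batch learner-dropping scheme described above (a toy numpy stand-in for illustration, not the actual training code):

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_drop_mask(num_learners, p):
    """Keep a random subset of learners for one batch, dropping a
    proportion p (at least one learner is always kept)."""
    keep = max(1, int(round((1 - p) * num_learners)))
    idx = rng.permutation(num_learners)[:keep]
    mask = np.zeros(num_learners, dtype=bool)
    mask[idx] = True
    return mask

def joint_batch_prediction(preds, mask):
    """Joint (averaged) prediction over only the learners kept this batch."""
    return preds[mask].mean(axis=0)

M = 10
preds = rng.normal(size=(M, 5))     # M learners, batch of 5 examples
mask = batch_drop_mask(M, p=0.4)    # drop 40% of learners this batch
ens = joint_batch_prediction(preds, mask)
# Over many batches each learner is trained on roughly (1 - p) of the
# data, which is why this scheme behaves like bootstrapping.
```

Colluding $\pm c$ offsets no longer cancel when their partner learner is absent from a batch, which is the mechanism by which dropping reduces the inflated diversity score.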
$~$
Proposing methods to overcome learner collusion is a valuable but non-trivial direction for future work that we and, we hope, others in the research community will pursue. We believe that the comprehensive diagnosis of the issue we have presented in this work will provide a foundation for future methodological progress.
$~$
**Action taken**: We have added the results of this experiment as a negative result in order to aid future research and expanded upon our discussion of future work in the conclusion.
$~$
[1] Nixon, Jeremy, Balaji Lakshminarayanan, and Dustin Tran. "Why are bootstrapped deep ensembles not better?." ''I Can't Believe It's Not Better!''NeurIPS 2020 workshop. 2020.
$~$
* _Facebook dataset_ - Please see our response in the general comment section.
---
Rebuttal Comment 1.1:
Title: @a4xj: Please engage with rebuttal
Comment: The authors have posted their rebuttal to your review—does it affect your opinion? It'd be helpful if you can at least acknowledge having read the rebuttal, even if you don't find it convincing. Thanks!
---
Rebuttal Comment 1.2:
Title: Great rebuttal.
Comment: Thank you for your clarifications. I read the discussions with the other reviewers too. I would like to see the paper accepted. | Summary: Ensembles are a simple but powerful way to improve model performance. Typically, ensembles are used to improve performance by training each model independently and then using them jointly. However, unlike previous ML methods that require ensemble members to be trained individually, Deep Ensemble, an ensemble of deep learning models, can also be trained jointly. Considering that joint performance is the true objective of Deep Ensemble, it seems more natural to train jointly rather than individually. However, in practice, joint training leads to poor performance and poor generalization. The authors show theoretically and empirically that this is due to ensemble diversity. From this, they propose an augmented objective $\mathcal{L}^\beta$ that allows for a linear interpolation between joint and independent training. Finally, they hypothesize that this is due to learner collusion, a phenomenon where each model has a bias when jointly training an ensemble, and show through experiments that this is indeed the case.
Strengths: - The notation is well organized and the formulas are easy to follow.
- The problems covered in previous studies are well summarized and theoretically explained in Sections 2, 4, and 5.
- The reason why joint training of Deep Ensemble fails is effectively explained through *learner collusion* and is well supported experimentally.
Weaknesses: - The authors do not propose a way to prevent learner collusion while doing joint training, so there is not much contribution in terms of practicality. The possibility of partial joint training by weighting between independent training and joint training using an augmented objective is presented as a future work, which is already analyzed in various ways in Webb et al. (2020).
- (Minor) The overall formatting of the paper is less polished and a bit difficult to read.
- Figures are scattered throughout the page. If possible, the figure should be located at the top of the page.
- The text in the figure is too small to read the results. Increase the size of the text so that it is not too different from the size of the main text.
- The lack of titles in the references makes it very difficult to identify the relevant papers. **Please fix this.**
-----
(Webb et al., 2020) [To Ensemble or Not Ensemble: When does End-To-End Training Fail?](https://arxiv.org/abs/1902.04422)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do you think it would be beneficial to drop a different fraction of base learners each step during the joint training of the ensemble to avoid learner codependency?
- Minor comments:
- As mentioned above, the formatting of the reference is incorrect. At least the title of the paper should be visible.
- NeurIPS style rules do not allow vertical lines in tables. I would recommend removing the background color as well.
- Overall, I like the paper, but you spend too much space organizing and formalizing previous research; I think it would be better to add the experiments in the appendix to the main text.
- In Figure 6, it looks better to add the ImageNet title to the two graphs on the right for consistency.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I see no potential negative societal impact from this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for their constructive feedback. We have addressed the points raised below, which have helped us further improve our submission's clarity and contribution.
$~$
* _Practical contribution_ - While we appreciate that the focus of this work was not on proposing new methods, we do believe that identifying and characterizing the limitations of joint training will still have a significant practical impact. We have already discussed several previous works which have directly attempted joint training (see Sec. 2 “joint training” & a deeper exposition of the limitations of the analyses of these works in App E) but it is likely that this issue is regularly unknowingly rediscovered by practitioners as joint training is a natural objective to consider. For example: (a) [1] reports training an ensemble sequentially due to “unstable” training dynamics when training simultaneously, (b) [2] introduces a regularization term to prevent their ensemble from “collapsing to degenerate solutions” and (c) [3] which argues that training for both loss and diversity on the same data may "render the convergence point of the training process uncontrollable". We hope that, at a minimum, our work can act as a canonical reference for this issue such that future works might avoid this common pitfall.
$~$
Regarding the augmented objective, we would point out that our objective is more general than that of Webb et al. as it is defined for all twice or more differentiable loss functions. However, we agree that as a proposed solution this would be similar in spirit. In our work, we intended the augmented objective to primarily be a tool for analysis of the limitations of joint training rather than a complete solution. We do hope that the result indicating that low levels of $\beta$ still perform reasonably well will inspire future methods that can overcome the limitations we have described.
$~$
**Action Taken**: We have added a note on the consequences of learner collusion to future ensemble research in our conclusion.
$~$
[1] Pagliardini, Matteo, et al. "Agree to Disagree: Diversity through Disagreement for Better Transferability." The Eleventh International Conference on Learning Representations. 2022.
[2] Lee, Yoonho, Huaxiu Yao, and Chelsea Finn. "Diversify and disambiguate: Out-of-distribution robustness via disagreement." The Eleventh International Conference on Learning Representations. 2022.
[3] Pang, Tianyu, et al. "Improving adversarial robustness via promoting ensemble diversity." International Conference on Machine Learning. PMLR, 2019.
$~$
* _dropping a fraction of base learners each step during the joint training_ - We agree that this is a very natural solution to learner collusion, thank you for the suggestion! The hypothesis being that if we drop a sufficiently large portion of base learners in, say, each batch but still perform joint training on the remainder, this should cause inflated diversity to be actively harmful even on the training data and therefore force the ensemble to avoid this degeneracy. We decided to try out this idea which we describe next.
$~$
We repeat the setup on CIFAR-10 with the ResNet-18 architecture as described in the paper. We train each model using the joint objective, but at each batch we drop a proportion $p \in \{0, 0.2, 0.4, 0.6\}$ of randomly selected learners from the ensemble. We then investigate whether this (a) reduces collusion and (b) improves ensemble performance. **The results of this experiment are included in the attached pdf**. We find that dropping learners does indeed significantly reduce the diversity score, indicating a reduction in learner collusion (although a reasonably large proportion of learners needs to be dropped). Unfortunately, the improvement in performance from reducing collusion is negated by a decrease in individual performance due to the base learners becoming weaker on average. The reason becomes apparent once we notice that dropping a base learner from some proportion of batches is exactly equivalent to bootstrapping. While bootstrapping has been effective for ensembles of weak learners (e.g. random forest), it has already been shown to be harmful for deep ensembles [4]. Unfortunately, despite being a sensible idea, this resolution can only reduce the effect of learner collusion by simultaneously harming the performance of the ensemble due to bootstrapping.
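The per-batch dropping scheme can be sketched as follows; this is a minimal NumPy illustration with assumed array shapes and a hypothetical helper name, not the actual experimental code:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropped_ensemble_prediction(preds, p):
    """Average predictions over a random subset of learners, dropping a proportion p.

    preds: array of shape (M, batch_size) -- one row per base learner.
    """
    M = preds.shape[0]
    n_keep = max(1, int(round(M * (1 - p))))  # always keep at least one learner
    keep = rng.choice(M, size=n_keep, replace=False)
    # The joint (ensemble) loss for this batch is then taken on this average,
    # so each learner only contributes to roughly a (1 - p) fraction of batches --
    # i.e. a bootstrap-like subsampling of the training data per learner.
    return preds[keep].mean(axis=0)

preds = rng.normal(size=(5, 8))  # M = 5 learners, batch of 8 examples
print(dropped_ensemble_prediction(preds, p=0.4).shape)  # (8,)
```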
$~$
Proposing methods to overcome learner collusion is a valuable but non-trivial direction for future work that we and, we hope, others in the research community will pursue. We believe that the comprehensive diagnosis of the issue we have presented in this work will provide a foundation for future methodological progress.
$~$
**Action taken**: We have added the results of this experiment as a negative result in order to aid future research.
$~$
[4] Nixon, Jeremy, Balaji Lakshminarayanan, and Dustin Tran. "Why are bootstrapped deep ensembles not better?." ''I Can't Believe It's Not Better!''NeurIPS 2020 workshop. 2020.
$~$
* _Formatting issues_ - We wish to sincerely apologize to all four reviewers for a (silent) latex compiling error in the final version of our manuscript that resulted in the titles of the papers not appearing in the PDF. This was an unfortunate inconvenience for the reviewers. We do wish to highlight that several reviewers made positive comments about the writing and organization of this work – thus, the issues regarding the presentation are limited to simple **formatting errors which we have now rectified**. Furthermore, we have implemented the further formatting suggestions from this reviewer and, should the paper be accepted, would ensure careful attention is paid during further polishing of the manuscript for a camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answers and additional experiments. I find this finding very interesting and think it will be helpful for future research. I have decided to raise my review score.
---
Reply to Comment 1.1.1:
Comment: We are delighted we were able to address this reviewer's concerns and appreciate their positive conclusion. We thank them for a constructive review process which helped improve our paper's clarity and contribution. | Summary: This paper explores the joint training of deep ensembles, wherein the ensemble error is directly optimized during training. The authors find that joint training leads to poor performance, which they posit is due to a phenomenon they call "learner collusion", where base learners artificially inflate their diversity at the expense of test performance. The authors provide a new decomposition of a broad class of loss functions in terms of the average error rate, the ensemble error rate, and a generalized notion of diversity. The authors then perform an experimental evaluation into the prevalence of learner collusion during joint training, using an augmented objective function to probe the effect of explicitly encouraging diversity during training.
Strengths: Overall, the paper is very well-written and easy to follow, and in my view the topic is very relevant to the community.
To me, the most compelling experimental evidence for the learner collusion phenomenon is that presented under "learner codependence"/in Figure 4. From these results it is very clear that something distinct is happening in the joint training regime, and that the individual models are exploiting the joint loss. The definition of the augmented objective $\mathcal{L}^\beta$ seems like a very useful tool for probing these and other related phenomena.
Weaknesses: Unless I am misunderstanding, the conclusions drawn from Figure 2 do not seem to strongly support the claim that joint training significantly harms ensemble performance. Indeed, for the Facebook task, it seems the method actually helps, while for others it does not. This is in contrast to the CIFAR results in Table 1/Figure 6, where the degradation in performance is significant. Could the authors provide some clarification as to the intended conclusion from these experiments?
There are a number of other papers in the literature that study some form of the gap (in the authors' notation) $\bar{\text{ERR}} - \text{ERR}$, and express and/or bound it in terms of various forms of "diversity", depending on the given loss function. For example Ortega et al., 2022, Abe et al., 2022 and Masegosa et al., 2020 give explicit expressions for the average - ensemble error gap in terms of diversity metrics that are easily interpretable. It's unclear to me that the form in Theorem 4.5 adds much to this literature, and indeed the stated expression for diversity doesn't seem to be used in the remainder of the paper.
Overall, while I think the paper is well-written and the topic is relevant, I think there is 1) insufficient use/novelty in the theoretical results and 2) somewhat incomplete conclusions to be drawn from the empirical results to recommend accepting at the current stage. In particular, while it certainly appears like some type of "learner collusion" is happening, it seems very important to understand why this only appears at $\beta=1$, and exactly what impact this could have on the ensemble error.
As a minor point, there seems to be an issue with the references -- they don't seem to contain any titles of the referred papers.
**References:**
Taiga Abe, E Kelly Buchanan, Geoff Pleiss, and John Patrick Cunningham. The best deep ensembles sacrifice predictive diversity, 2022.
Luis A. Ortega, Rafael Cabañas, and Andres Masegosa. Diversity and generalization in neural network ensembles, 2022.
Andres Masegosa, Stephan Lorenzen, Christian Igel, and Yevgeny Seldin. Second order PAC-Bayesian bounds for the weighted majority vote, 2020.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I find the distinct change in behavior of the various metrics at $\beta=1$ very interesting; it seems the behavior is relatively benign for $\beta < 1$, and yet there appears to be a distinct transition at $\beta = 1$. Do the authors have any hypotheses as to why this occurs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for their constructive feedback. We have addressed the points raised below, which have helped us to further improve our submission's clarity and contribution.
$~$
* _Facebook dataset & Formatting_ - Please see our response in the general comment section.
$~$
* _On Thm. 4.5_ - We apologize for not sufficiently emphasizing the significance of this result. This theorem is an essential ingredient as it demonstrates that, for any at least twice differentiable loss function, the ensemble loss can be decomposed into the aggregate individual loss and a term that captures ensemble diversity. This generalizes specific examples of this decomposition (e.g. Krogh and Vedelsby, 1994) and unifies any analysis of joint training to practically any loss function. Indeed, we do not manually calculate diversity using the Hessian as it is more efficiently calculated as $\bar{\text{ERR}} - \text{ERR}$, but having this theorem ensures that the term we obtain from this calculation can always be interpreted as ensemble diversity. Furthermore, this theorem also guarantees that for positive semi-definite losses (e.g. MSE, Cross-Entropy), the diversity is lower bounded by zero – an important property for a sensible definition of diversity. We emphasize that without this theorem we could only make conclusions about specific loss functions, whilst with it we can discuss diversity and joint training in the general setting – making ours the first paper to do so.
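For the MSE case, this calculation reduces to the classic ambiguity decomposition; the following small numerical sketch (our illustration with random data, not the paper's implementation) verifies that the diversity obtained as average individual error minus ensemble error is non-negative and equals the spread of the learners around the ensemble mean:

```python
import numpy as np

rng = np.random.default_rng(1)
preds = rng.normal(size=(4, 100))  # M = 4 learners, 100 examples
y = rng.normal(size=100)

bar_err = np.mean([(f - y) ** 2 for f in preds])  # average individual MSE
err = np.mean((preds.mean(axis=0) - y) ** 2)      # MSE of the averaged ensemble
div = bar_err - err                               # diversity via the decomposition

# For MSE this equals the mean squared deviation of the learners from the
# ensemble mean (the Krogh & Vedelsby ambiguity term), hence it is >= 0.
direct = np.mean([(preds.mean(axis=0) - f) ** 2 for f in preds])
print(np.isclose(div, direct), div >= 0)  # True True
```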
$~$
**Action taken**: We have added further context on the significance of this result in the text.
$~$
* _Distinct transition at $\beta = 1$_ - This is indeed a notable phenomenon that we only addressed in the appendix. Let us provide a more succinct discussion on this point.
$~$
Figures 2-4 do highlight a sudden jump in learner collusion as $\beta \to 1$ in the case of regression rather than a gradual increase. Why does this occur? A useful explanation lies in the gradient-adjusted target (GAT) analysis in App D. To briefly summarize, this section considers applying individual training but with an adjusted target which is a function of the gradient of the ensemble error. Specifically, we adjust the targets to account for the errors of the ensemble. Formally, we optimize $\frac{1}{M}\sum_{j=1}^M (f_j - \bar{y})^2$ where $\bar{y}$ is the adjusted target given by $\bar{y} = y - \alpha \cdot g$. Here $g$ is the ensemble loss gradient and $\alpha$ is a step size parameter determining how big a step the target of an individual learner should take in order to account for the errors of the ensemble. In Thm. D.1 we show that this GAT objective is exactly equivalent to our augmented objective in Sec. 5 with $\beta = (1 - \frac{1}{(1 + \alpha)^2})$. Given this relationship, we might notice that the gradient step $\alpha = (1-\beta)^{-1/2} - 1$ grows unboundedly as $\beta \to 1$, resulting in each individual learner having their targets biased excessively to account for the ensemble errors (**please see the new figure in the attached PDF visualizing this relationship**). Given this perspective, it is unsurprising that learner collusion does not grow linearly with $\beta$.
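Concretely, inverting $\beta = 1 - \frac{1}{(1+\alpha)^2}$ gives $\alpha = (1-\beta)^{-1/2} - 1$, which can be tabulated directly (an illustrative sketch only, with a hypothetical helper name):

```python
import math

def alpha_from_beta(beta):
    """Invert beta = 1 - 1/(1 + alpha)^2 to recover the GAT step size alpha."""
    return 1.0 / math.sqrt(1.0 - beta) - 1.0

for beta in (0.0, 0.5, 0.9, 0.99, 0.999):
    print(f"beta = {beta:<6} -> alpha = {alpha_from_beta(beta):.2f}")
# alpha stays modest for beta well below 1 but diverges as beta -> 1,
# so the adjusted targets become dominated by the ensemble-error correction.
```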
$~$
**Action taken**: We have added a succinct summary of this point in the main text. We have also added this new plot and refined the discussion in App D.
$~$
* _Related works_ - Thank you for raising this point, we hope that extending our discussion to these works will further highlight our contribution. Firstly, we wish to reiterate that our work investigates the direct optimization of the loss of the ensemble which we refer to as joint training. Given our findings (i.e. learner collusion), it is not surprising that there have been several works that attempt to improve ensemble performance by other means (i.e. proposing alternative methods of encouraging diversity). We categorized this literature as “ensemble-aware individual training” in our background section. We believe that our investigation of the underlying issue with directly optimizing the ensemble will provide an important basis for future methods of this type.
$~$
Masegosa et al. (2020) is one such example in which the authors propose to optimize a PAC-Bayesian generalization bound in the case of cross-entropy. Whilst conceptually appealing, optimizing the bounds of model performance is generally not a standard approach in machine learning as it is unclear how minimizing a worst-case bound or maximizing a best-case bound directly affects the actual generalization performance under the metric of interest of a model. However, given the limitations we have identified in directly optimizing that metric of interest in our work, this approach might offer an effective alternative providing a fruitful direction for future work. Then Ortega et al. (2022) extended this approach to additional loss functions, analytical analysis and a more practical empirical evaluation. We hope that our contribution here will provide a useful foundation for alternative ensemble training approaches such as these.
$~$
The recent workshop paper of Abe et al. (2022) is indeed relevant as it considers decomposing the objective into individual loss and diversity for two specific cases (MSE and CE with probability averaging) and provides a useful motivation for our investigation of joint training. Whilst this work does provide preliminary evidence of poor performance under joint training (an empirical observation that was also reported and misdiagnosed previously in the literature as discussed extensively in our related work and App. E), it does not investigate why this occurs - the primary contribution of our work. Furthermore, all of the contributions of our work (as summarized in Fig 7), are entirely novel with respect to this work.
$~$
**Action taken**: We have integrated these works into our related work discussion, thank you for encouraging us to extend our literature review.
---
Rebuttal Comment 1.1:
Title: @m1Sy: Please engage with rebuttal
Comment: The authors have posted their rebuttal to your review—does it affect your opinion? It'd be helpful if you can at least acknowledge having read the rebuttal, even if you don't find it convincing. Thanks! | Summary: This paper mainly studies the reason behind the failure of jointly training deep ensembles. It discovers that joint optimization results in a phenomenon in which base learners collude to artificially inflate their apparent diversity. Both theoretical and empirical evidence are provided further to verify the hypothesis.
Strengths: (1) This paper attempts to study a seemingly under-explored question, that is, why joint training of ensembles fails to generalize better than individual training and ensemble. This research topic is very important and helps us to understand the foundations of the deep ensemble.
(2) Both theoretical proof and empirical proof are provided.
Weaknesses: (1) One big issue of this paper is that all references do not contain titles, which, to be honest, I have never seen in top-tier conferences like NeurIPS.
(2) I do not understand why joint training is easier to train but results in poor generalization. What does "easier to train" refer to?
(3) What joint training approaches are used in the paper?
(4) The empirical evaluation isn't very convincing to me. For instance, how many ensemble members are we using for "Learner codependence"? It is not surprising to me that the performance degrades if we drop ensemble members during testing, especially if the overall number of ensemble members is not sufficient.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Please see my above weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: N/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for their constructive feedback. We have addressed the points raised below, which have helped us to further improve our submission's clarity and contribution.
$~$
* _What joint training approaches are used_ - Joint training refers to the case when the aggregated predictions of the entire ensemble are optimized directly. Formally (using the notation from our paper): given an ensemble of learners $f_1, \ldots, f_M$, we might consider optimizing the _joint objective_ $\mathcal{L}(\frac{1}{M} \sum_{i=1}^{M} f_i , y)$ ($= \text{ERR}$) where $\mathcal{L}$ is a given loss function such as mean squared error or negative log-likelihood. This is in contrast to the commonly observed _independent training_ where each ensemble member is optimized directly resulting in an optimization objective given by $\frac{1}{M} \sum_{i=1}^{M} \mathcal{L}(f_i , y)$ ($= \bar{\text{ERR}}$).
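In code, the distinction between the two objectives can be sketched as follows (a minimal NumPy illustration with made-up predictions and hypothetical helper names, not the paper's implementation):

```python
import numpy as np

def joint_loss(preds, y):
    """ERR: MSE of the averaged ensemble prediction (the joint objective)."""
    return np.mean((preds.mean(axis=0) - y) ** 2)

def independent_loss(preds, y):
    """bar-ERR: average of each learner's individual MSE (independent training)."""
    return np.mean([np.mean((f - y) ** 2) for f in preds])

# Two learners with opposing constant biases around the target y = 0:
y = np.zeros(4)
preds = np.stack([np.full(4, 1.0), np.full(4, -1.0)])
print(joint_loss(preds, y))        # 0.0 -- the biases cancel in the ensemble average
print(independent_loss(preds, y))  # 1.0 -- each learner is individually poor
```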
$~$
**Action taken**: This definition is an important foundation for the contributions of our paper. We have therefore added this formal definition earlier to the introduction section to ensure it is established up front and without ambiguity.
$~$
* _What does "easier to train" refer to?_ - We did not make any claims regarding any method being “easier to train”. In fact, this quote does not appear in our paper. A point that we did make (e.g. L30-38) is that – despite the joint objective being the “true objective of interest” – throughout the literature, we typically observe the ensemble being trained independently in practice (occasionally with some regularization on the ensemble). Again, let us make this point more formally using the same notation as the previous point. The key question our paper addresses is: given that when we evaluate the performance of an ensemble (e.g. at test time or in production) using the loss term $\text{ERR}$, why do we not optimize for that objective during training rather than the commonly observed proxy of $\bar{\text{ERR}}$? Of course, the conclusion is that this is for good reason: the joint training of deep ensembles (i.e. optimizing $\text{ERR}$) fails due to learner collusion.
$~$
**Action taken**: We have integrated this point more formally into the introduction as part of the additional formalism from our previous action point. Thank you for encouraging us to further improve our clarity.
$~$
* _Empirical evaluation_ - We agree that extending the empirical evaluation of the practical limitations of joint training can further strengthen this work. We have therefore **extensively broadened our evaluation on ImageNet** (from Table 1 and Figure 6 RHS) with eight additional architectures and various ensemble sizes. All models are trained from scratch and **we include these new results in the attached PDF document** (Table 5) where we consistently find that joint training performs significantly worse than independent training, thus reinforcing the claims of our work.
$~$
**Action taken**: We appreciate this suggestion and have updated our manuscript by including these additional results in Sec. 3.
$~$
* _It is not surprising that the performance degrades to me if we drop ensemble members during testing (regarding Figure 4)_ - Indeed, it is correct to assume that dropping ensemble members at test time will likely result in a degradation in performance. Our intention in this experiment was to analyze the extent of that degradation for different training methods. If joint training results in learner collusion as we hypothesize, we would expect that the individual learners would become more codependent in that setting. To illustrate this, consider the trivial case of two identical learners that reasonably accurately predict the label $f_1 = f_2 \approx y$. Should they collude by biasing their regression predictions in opposing directions by some constant $k \in \mathbb{R}$ (i.e. $f_1^\text{bias} = f_1 + k$ and $f_2^\text{bias} = f_2 - k$), this would artificially inflate diversity ($\text{DIV} = \frac{1}{2} \sum_{j=1}^2(\bar{f} - f_j^\text{bias})^2 = k^2$) but result in a codependence such that dropping either learner at test time would be catastrophic to the ensemble performance as the remaining learner would be highly biased by exactly the term $k$. If these learners were independently trained and, therefore, not colluding, this bias can not be learned and the resulting drop in performance should be significantly less. In Figure 4 this is exactly what we observe. For all values of $\beta$ we consider the relative increase in test loss upon dropping a subset of the learners at test time. As $\beta \to 1$ (i.e. as we approach joint training) we observe a very large increase in that test error while elsewhere the relative increase in error is far more modest.
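The two-learner example above is easy to verify numerically; the following sketch (our illustration, with an arbitrary bias $k$) reproduces $\text{DIV} = k^2$ and the failure upon dropping a learner at test time:

```python
import numpy as np

k = 3.0
y = np.array([1.0, 2.0, 3.0])
f1, f2 = y + k, y - k        # two accurate learners colluding with opposite biases

ensemble = (f1 + f2) / 2     # biases cancel exactly: ensemble == y
div = 0.5 * (np.mean((ensemble - f1) ** 2) + np.mean((ensemble - f2) ** 2))
print(div)                           # k**2 = 9.0: artificially inflated diversity
print(np.mean((ensemble - y) ** 2))  # 0.0: the full ensemble is perfect
print(np.mean((f1 - y) ** 2))        # 9.0: dropping f2 leaves a heavily biased learner
```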
$~$
**Action taken**: We have added a short interpretation of the results of this experiment directly in the caption of Figure 4. Again, thank you for encouraging us to further clarify exposition.
$~$
* _Formatting issue_ - We wish to sincerely apologize to all four reviewers for a (silent) latex compiling error in the final version of our manuscript that resulted in the titles of the papers not appearing in the PDF. This was an unfortunate inconvenience for the reviewers. We do wish to highlight that several reviewers made positive comments about the writing and organization of this work – thus, the issues regarding presentation are limited to simple **formatting errors which we have now rectified**. Should the paper be accepted, we would ensure careful attention is paid during further polishing of the manuscript for a camera-ready version.
---
Rebuttal Comment 1.1:
Title: @XMTw: Please engage with rebuttal
Comment: The authors have posted their rebuttal to your review—does it affect your opinion? It'd be helpful if you can at least acknowledge having read the rebuttal, even if you don't find it convincing. Thanks!
---
Rebuttal Comment 1.2:
Title: Thanks for the response
Comment: I thank the authors for the detailed response. It addressed some of my concerns. I believe the empirical results can contribute to the community. However, I believe the primary finding of this paper, i.e., that a jointly trained ensemble leads to learner collusion, is somehow trivial and straightforward. It is natural to me that a jointly trained ensemble leads to collusion without adding any regularization. Therefore, I would like to increase my score to 5, not 6.
---
Reply to Comment 1.2.1:
Comment: We thank this reviewer for their response and are glad that they have determined that the paper should be accepted. We agree that their suggested extension to our empirical results reinforces the claims of our work.
Let us briefly address the point that our findings are "*somehow trivial and straightforward*" as this was not raised in the original review (indeed, there this reviewer stated that "*this research topic is very important and helps us to understand the foundations of the deep ensemble*"). Our goal was to present this phenomenon as clearly and intuitively as possible and we are glad it was perceived as such. However, despite being a natural explanation in retrospect, the limitations of jointly training ensembles has been a **reoccurring issue** throughout the literature and, in the rare cases where they have been investigated, they have been **misdiagnosed**. We therefore suggest that a clear investigation of the issue, as provided by this work, is a worthwhile contribution to the literature which can (a) prevent the need to constantly rediscover this degeneracy, (b) amend the previous literature, and (c) guide future research into the optimization of deep ensembles.
$~$
* **Recurring issue** - It is likely that this issue is regularly and unknowingly rediscovered by practitioners as joint training is a natural objective to consider. For example: (a) [1] reports training an ensemble sequentially due to “unstable” training dynamics when training simultaneously, (b) [2] introduces a regularization term to prevent their ensemble from “collapsing to degenerate solutions", (c) [3] states that training for both loss and diversity on the same data may "render the convergence point of the training process uncontrollable", and (d) [4] mentions that joint training "reduces the accuracy of the ensemble and can easily lead to training instabilities". We hope that, at a minimum, our work can act as a canonical reference for this issue such that future works might avoid this common pitfall.
* **Misdiagnosed** - A small number of works have previously hypothesized about this problem. We addressed these works in Sec. 2 “joint training” with a deeper analysis in App E. The key takeaway is that the existing attempts to address this issue are insufficient for explaining the general phenomenon or, in some cases, are objectively incorrect (e.g. [5] reported successful joint training under SoftMax averaging which was discovered to be a coding bug that resulted in unintentionally implementing independent training instead). Although this degeneracy might appear trivial to this reviewer, we believe that there has been sufficient confusion in the literature to warrant our comprehensive analysis.
$~$
[1] Pagliardini, Matteo, et al. "Agree to Disagree: Diversity through Disagreement for Better Transferability." The Eleventh International Conference on Learning Representations. 2022.
[2] Lee, Yoonho, Huaxiu Yao, and Chelsea Finn. "Diversify and disambiguate: Out-of-distribution robustness via disagreement." The Eleventh International Conference on Learning Representations. 2022.
[3] Pang, Tianyu, et al. "Improving adversarial robustness via promoting ensemble diversity." International Conference on Machine Learning. PMLR, 2019.
[4] Mehrtens, Hendrik Alexander, Camila Gonzalez, and Anirban Mukhopadhyay. "Improving robustness and calibration in ensembles with diversity regularization." DAGM German Conference on Pattern Recognition. Cham: Springer International Publishing, 2022.
[5] Dutt, Anuvabh, Denis Pellerin, and Georges Quénot. "Coupled ensembles of neural networks." Neurocomputing 396 (2020): 346-357. | Rebuttal 1:
Rebuttal: We thank all four reviewers for their constructive feedback. We have found their feedback to be instructive, with their suggestions and questions helping us to further improve our submission's clarity and contribution.
$~$
* _Formatting error_ - We wish to sincerely apologize to all four reviewers for a (silent) latex compiling error in the final version of our manuscript that resulted in the titles of the papers not appearing in the PDF. This was an unfortunate inconvenience for the reviewers. We do wish to highlight that several reviewers made positive comments about the writing and organization of this work – thus, the issues regarding the presentation are limited to **simple formatting errors which we have now rectified**. Furthermore, we have implemented further formatting suggestions from pZUZ and, should the paper be accepted, would ensure careful attention is paid during further polishing of the manuscript for a camera-ready version.
$~$
* _Clarification on Figure 2 (Facebook dataset)_ - We thank reviewers **m1Sy** & **a4xj** who highlighted that it would be beneficial to comment on this result, we certainly agree that this particular result requires some further clarification. Unlike all other experiments throughout this paper, the test error on this task is relatively flat across all values of $\beta$ (after accounting for the standard errors). Therefore, one might ask whether this result contradicts any of our claims.
$~$
We begin by revisiting our analytical analysis of the regression case in App F. While this section mathematically uncovers how optimizing for diversity results in diversity-inflating bias terms, it does not guarantee that it will result in worse test performance. It is at least conceptually possible to construct a task in which diversity is not desirable and, therefore, learner collusion is not harmful. Further analysis indicates that this dataset is such a case (we also highlight that this is quite a popular regression task in the machine learning literature e.g. [1-3]). The goal in this task is to predict the number of impressions a post will obtain given several metrics. While this is a challenging task due to high variance in the response, the signal that does exist seems to be almost entirely contained within a single variable: “number of likes” resulting in no need for true diversity and, therefore, no substantial impact due to learner collusion. We verify this claim by matching the reported test performance with a single decision tree of depth 3 which (a) matches the performance of a neural network (MSE = 2.18 $\pm$ 0.66 over 5 runs) and (b) has feature importance dominated by this single “number of likes” variable (in all 5 runs the tree uses this feature as its primary split).
$~$
Finally, we note that this dataset is still valuable as (1) this experiment was investigating the existence of learner collusion – which certainly still occurs on this task, and (2) selective inclusion of datasets based on nice characteristics should be avoided – indeed, we hope that this result and our accompanying clarification may be instructive for future readers. Readers might also be interested to note that we have **extensively broadened our evaluation on ImageNet** (from Table 1 and Figure 6 RHS) with eight additional architectures and various ensemble sizes. All models are trained from scratch and **we include these new results in the attached PDF document** (Table 5) where we consistently find that joint training performs significantly worse than independent training, thus reinforcing the claims of our work.
$~$
**Action taken**: We have added a comment to the paper clarifying this result and significantly extended our experiments on ImageNet.
$~$
[1] Romano, Yaniv, Evan Patterson, and Emmanuel Candes. "Conformalized quantile regression." Advances in neural information processing systems 32 (2019).
[2] Sesia, Matteo, and Yaniv Romano. "Conformal prediction using conditional histograms." Advances in Neural Information Processing Systems 34 (2021): 6304-6315.
[3] Jeffares, Alan, et al. "TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization." The Eleventh International Conference on Learning Representations. 2022.
$~$
Pdf: /pdf/1ed9b50414f932defeb43f34caa446cf54a538a5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Approximating Nash Equilibria in Normal-Form Games via Unbiased Stochastic Optimization | Reject | Summary: This paper formulates a Lipschitz loss function that makes computing approximate interior Nash equilibria in normal-form games amenable to unbiased Monte Carlo estimation, opening the door to using a number of scalable stochastic optimization techniques. They also provide a loss function with similar properties but under the notion of quantal-response equilibrium (QRE). The authors also provide certain illustrative experiments to support their claims.
Strengths: This paper provides a novel approach to computing equilibria in multi-player games. In particular, the authors derive loss functions that make equilibrium computation amenable to scalable methods from stochastic optimization. Given the lack of scalable algorithms for computing solution concepts such as the Nash equilibrium, this is a promising approach, and has the potential to bring many new insights to equilibrium computation. In particular, the idea proposed for deriving an unbiased estimator (Section 4.4) is interesting, and addresses many of the pitfalls of other commonly used loss functions in the more challenging constrained setting. Furthermore, the presentation and the writing are overall clear, and the authors accurately place their results into the existing literature.
Weaknesses: There are a number of issues that weaken the contribution of the paper. First, the underlying assumption that there is an interior Nash equilibrium is very strong. For one, if there is an interior NE it is known that it can be computed in polynomial time via linear programming, which significantly weakens the motivation regarding the hardness of NE. There appears to be some confusion regarding this point. For example, Corollary 3 in the Appendix claims a new FPTAS for computing interior NE in polymatrix games, which I believe is known (beyond polymatrix games); there is perhaps still some benefit in using the proposed methodology in practice, but no evidence of that is provided in the paper. (As an aside, it would be helpful to clarify in the preliminaries that by interior you mean relative interior.) Beyond the very restrictive assumption of having an interior Nash equilibrium, the authors provide similar results for QRE, but that is a significantly weaker equilibrium concept. I would also strongly recommend clarifying in the abstract that your results apply for interior NE; as it is written currently it is very misleading.
Besides the issue mentioned above, there is an underlying premise in the proposed methodology which I find unconvincing: Why should we expect local optima in the formulated loss functions to give meaningful guarantees? The fact that this turned out to be the case in many ML applications is not enough to justify this proposition. It is a significant weakness that the proposed method has no theoretical finite-time guarantees of reaching a Nash equilibrium.
And unfortunately the experiments do not offer enough evidence to support this approach. Indeed, there are many issues in the experiments that can be significantly improved. First, the games experimented on are overly small; for example, Shapley's game is a toy example; no meaningful conclusions can be drawn from it. Since the main message of this paper is about scalability, I expected to see experiments on much bigger games. It would be helpful if the proposed theory applied to extensive-form games, for which there are many large benchmark games in the literature, but the current method is tailored to normal-form games. Besides this issue, I am very confused regarding the compared benchmarks. It is claimed in the last sentence of the abstract that the method often outperforms prior state of the art, but the main algorithms compared against are RM and FTRL. These algorithms will not even find an NE in finite time; how can you claim that those are the state of the art? In particular, in Lines 966-969 it is claimed that those are the two most popular scalable stochastic algorithms for approximating NE; I strongly disagree with this claim. I would suggest trying some other benchmarks, such as the Lemke-Howson algorithm; a mixed-integer programming approach; or the algorithm presented in "Exclusion Method for Finding Nash Equilibrium in Multiplayer Games."
Another issue concerns the proof of Corollary 1. For a constant $\epsilon$, it is claimed that you have a poly-time algorithm (since it is a PRAS), but you also say that the temperature parameter has to be exponentially small to achieve that. So it seems that even for a constant $\epsilon$ you need an exponential number of iterations to converge.
Overall, although the approach proposed is promising, there are a number of issues that have to be addressed before the paper is ready for publication.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Some minor issues:
1. There are many missing punctuation marks in the equations throughout the paper
2. There are many overfull equations in the Appendix; I would recommend fixing those in the revised version
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your encouraging statements. We believe we can address your concerns by clearing up a few misunderstandings and reporting the results of some additional experiments in accordance with your feedback. We hope you will consider increasing your score in light of these updates.
**Interior (Fully-mixed) Assumption**: We do **not** assume there exists a Nash equilibrium in the interior of the simplex. That assumption only exists for the “warm-up” to aid the reader in our derivation of an appropriate loss function. Starting in Section 4.5, this assumption is removed and Lemma 17 expresses a key result that connects our loss function back to the standard $\epsilon$-Nash definition for any approximate equilibrium (including pure equilibria).
**Local Optima**: We **agree with you** that local (suboptimal) optima could be a problem, and this is why we directly explore this point empirically. Our critical point experiments as showcased in Figure 2 are meant to study the frequency of encountering these saddle points / local minima in practice. Interestingly enough, although suboptimal saddle points are prevalent, the more critical case of local minima seems rarer.
The critical part of Figure 2 is the top left corner. This corresponds to local minima of our loss function (no descent directions exist) that have relatively high levels of exploitability. This part of the figure shows that only one of our games (2p Sheriff) has several such bad local optima. But even for this game, as the first column shows, the vast majority of local optima have very low exploitability. In all other games, high exploitability is strongly correlated with large index and thus SGD should perform well.
**Theoretical Finite Time Guarantees**: The primary theorem (Theorem 1) in our paper concerns a globally convergent stochastic non-convex optimization algorithm (BLiN) applied to our loss. This theorem expresses a finite time convergence rate to a **global, not local** optimum.
**Insufficient Experiments (Toy Games and Missing Baselines)**: We experiment on games much larger than Shapley’s. Our largest Blotto game examined in Figure 3 contains $4 \cdot 66^4 > 75,000,000$ payoff entries. As far as NFGs go, these are quite large, and as we’ll demonstrate below, present a challenge for classical solvers. We also agree that extending our approach to EFGs is quite interesting, but is out of scope for the current paper. Note that in our experiments, we measure exploitability of an approximate equilibrium exactly. If the game is too large, this measurement becomes intractable by nature of a sum over an exponential $nm^n$ number of payoff entries. In this case, to our knowledge, our loss (which requires only $2nm$ lookups to evaluate) presents the only option for estimating an unbiased upper bound on $\epsilon$.
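To make the cost contrast concrete, here is a toy sketch (an illustrative stand-in, not the paper's exact estimator) of why sampling sidesteps the exponential sum: the exact gradient of one player's expected utility requires $m^{n-1}$ payoff terms per entry, while an unbiased Monte Carlo estimate needs only a few payoff lookups per sample:

```python
import itertools
import random

random.seed(1)

# Hypothetical 3-player game with m = 4 actions each; payoff tensor for
# player 0 only (illustrative, not one of the games in the paper).
n_players, m = 3, 4
u0 = {a: random.random() for a in itertools.product(range(m), repeat=n_players)}

# A fully mixed strategy profile (uniform for simplicity).
x = [[1.0 / m] * m for _ in range(n_players)]

def exact_grad():
    # Exact gradient of player 0's expected utility: m^(n-1) terms per entry.
    g = [0.0] * m
    for a0 in range(m):
        for rest in itertools.product(range(m), repeat=n_players - 1):
            prob = 1.0
            for k, ak in enumerate(rest, start=1):
                prob *= x[k][ak]
            g[a0] += prob * u0[(a0,) + rest]
    return g

def sampled_grad(s):
    # Unbiased estimate: sample opponents' joint action, then m lookups.
    g = [0.0] * m
    for _ in range(s):
        rest = tuple(random.choices(range(m), weights=x[k])[0]
                     for k in range(1, n_players))
        for a0 in range(m):
            g[a0] += u0[(a0,) + rest] / s
    return g

exact = exact_grad()
est = sampled_grad(20000)
print(max(abs(a - b) for a, b in zip(exact, est)))  # close to zero
```

The sampled estimate converges to the exact gradient while touching only $m$ payoff entries per draw, which is what keeps the approach viable when the full tensor is too large to enumerate.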
Lemke-Howson only applies to 2-player games; however, Govindan and Wilson developed a method that is now recognized as its counterpart for 3+ player games. We ran this algorithm (*gambit-gnm*) and several others from the gambit library [1] (listed below) on both Blotto games. Only *gambit-enumpoly* and *gambit-enumpure* are able to return any NE for 3-player Blotto within a 1 hour time limit (and only pure equilibria). And only *gambit-enumpure* returns any NE for the 4-player game. Note we also test on the D7-Covariant game from the GAMUT benchmark set (Figure 3, second plot), which was revealed to be a particularly challenging game to solve for a set of classical methods (including Govindan-Wilson, simplicial subdivision, and CSP-style approaches) in the paper by Porter, Nudelman, and Shoham [2].
Thank you for sharing the paper “*Exclusion Method for Finding Nash Equilibrium in Multiplayer Games*”! In fact, BLiN applied to our loss can be thought of as a stochastic generalization of this method (see Figure 1 of X-armed Bandits [8] and Figure 1 of BLiN [14] for nice visuals of the X-armed bandit family approach that parallel the *exclusion method*). In the paper you cite, the regret calculated in their definition (1) is exactly the $\epsilon$ that we have developed an unbiased estimate for.
Lastly, we consider the previous state-of-the-art to be ADIDAS [15] (denoted by $^yQRE^{auto}$ in the legend); note that [15] similarly includes negative results from running gambit on Blotto (see appendix H.2). Again, thank you for reading the appendix. FTRL and RM are popular methods that researchers often resort to [3, 4] because so few algorithms currently scale to large many player games.
**PRAS Designation**:
Thank you for catching this! The phrasing in the appendix is not perfectly accurate as the relevant temperature parameter is $\tau$ which is equal to $1/\ln(1/p)$. In this case, any exponential decrease for $p$ manifests as a linear decrease for $\tau$ due to the natural logarithm. We will make sure to resolve this subtlety in the appendix and make this clearer in the paper.
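A quick numeric illustration of this relationship (plain arithmetic, no paper-specific code):

```python
import math

# The temperature is tau = 1 / ln(1/p): driving p down exponentially
# (p = e^{-k}) only drives the temperature down linearly (tau = 1/k).
for k in [1, 10, 100]:
    p = math.exp(-k)
    tau = 1.0 / math.log(1.0 / p)
    print(f"p = e^-{k:<3}  ->  tau = {tau:.4f}")
```

So an exponentially small $p$ corresponds to only a linearly small $\tau$, which is the subtlety we will clarify in the appendix.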
Thank you again for the careful reading of our paper. **Proposed edits will follow in another post (space limits).**
[1] McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2016). Gambit: Software Tools for Game Theory, Version 16.0.1. http://www.gambit-project.org.
https://gambitproject.readthedocs.io/en/latest/tools.html
- gambit-enumpoly [73s 3-player, timeout 4-player]
- gambit-enumpure [72s 3-player, 45s 4-player]
- gambit-gnm
- gambit-ipa
- gambit-liap
- gambit-logit
- gambit-simpdiv
[2] Porter, Ryan, Eugene Nudelman, and Yoav Shoham. "Simple search methods for finding a Nash equilibrium." Games and Economic Behavior 63.2 (2008): 642-662.
[3] Bakhtin, Anton, et al. "Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning." The Eleventh International Conference on Learning Representations. 2022.
[4] Gray, Jonathan, et al. "Human-Level Performance in No-Press Diplomacy via Equilibrium Search." International Conference on Learning Representations. 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I am satisfied with your clarifications regarding the experimental evaluation.
The assumption of having fully mixed Nash equilibria still puzzles me; starting from Section 4.5 you are focusing on QRE not Nash equilibria. So based on your response it is fair to say that the main focus of this paper is on QRE and not Nash equilibria. If that is the case, then at the very least this has to be highlighted (for example, in the title, the abstract, and the introduction). This has a bearing on the evaluation of the paper since QRE is arguably a much less attractive solution concept, and is not what the paper promises based on the title and the abstract. Let me know if I am misunderstanding some point.
---
Reply to Comment 1.1.1:
Title: Solution Concept (Nash) vs Algorithm (Regularization and Homotopies)
Comment: Thank you for your quick response. We believe our discussion below can help clarify your concerns about the paper’s framing. To explain, we want to differentiate between the solution concept we are studying (Nash) and the algorithmic approach we take to approximate it (QREs at low temperature).
**QRE: An Algorithmic Approach To Computing Nash**: Even in much more restrictive classes of games (e.g., monotone games), introducing and then annealing strong regularizers is a traditional and practical approach to approximating Nash equilibria: Tikhonov regularization [1] (monotone), Friction FoReL [2] (monotone), ADIDAS [3] (non-monotone). In addition, McKelvey and Palfrey introduced the QRE solution concept and immediately used it to compute a Nash equilibrium by annealing the temperature of a QRE to zero [4]. Our point is that each of the approaches above regularize the game, thereby solving for an intermediate yet transient solution concept, in order to solve for the ultimately desired solution of Nash equilibrium. This is a well-established technique in the literature for designing algorithms with convergence to Nash.
**Our Focus: Nash Equilibrium**: Our paper focuses on constructing an algorithm with convergence guarantees to approximate Nash equilibria (as measured by exploitability $\epsilon$) in general-sum games. We demonstrate this focus both theoretically and empirically.
1) Lemma 17 establishes a result that allows us to upper bound the exploitability $\epsilon$ of a strategy profile $\boldsymbol{x}$ as a function of our loss (norms of entropy-regularized gradients). Lemma 17 explains how to approximate Nash equilibria given QRE as a stepping stone. QRE is not the final end goal.
2) Theorem 1 uses Lemma 17 in conjunction with non-convex optimization guarantees to provide a convergence rate to approximate Nash equilibria -- note we still measure our approximation error by $\epsilon$.
3) Lastly, experimental performance in Figure 3 is measured without entropy regularization. We’ll make sure to emphasize this in the updated version.
In summary, our theory and evaluation metrics use Nash exploitability as the yardstick (and not e.g., distance to a QRE).
We appreciate, and with hindsight, **agree** with your concern that readers might miss this difference (solution vs algorithm). We will make sure to emphasize this difference throughout the text explaining we approximate Nash equilibria (our solution concept of focus) by way of approximating QREs at vanishing temperature (our algorithmic approach).
Thank you again for bringing this to our attention.
[1] Facchinei, Francisco, and Jong-Shi Pang, eds. Finite-dimensional variational inequalities and complementarity problems. New York, NY: Springer New York, 2003. Page 1125.
[2] Perolat, Julien, et al. "From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization." International Conference on Machine Learning. PMLR, 2021.
[3] Gemp, Ian, et al. "Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent." Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems. 2022.
[4] McKelvey, Richard D., and Thomas R. Palfrey. "Quantal response equilibria for normal form games." Games and economic behavior 10.1 (1995): 6-38. | Summary: This paper presents a novel approach for determining the Nash equilibrium of normal form games, utilizing a solution to a non-convex stochastic optimization problem. It defines the Nash equilibria in normal form games as the global minima of a specifically cunstructed loss function. Moreover, a randomized algorithm is developed to resolve this newly proposed loss function. Finally, empirical results further verify the theoretical analysis.
Strengths: Though the idea of a loss function has been proposed before, this paper contributes to the discourse with several innovative insights that enhance the understanding and applicability of loss functions. For example, this paper restricts the parameter to the simplex, which is the key to making the stochastic gradient unbiased.
Regarding the quality and clarity, this paper is sufficiently complete. It also provides clear backgrounds, which make it easy to understand how this loss function comes from. It is not completely new but it has something new.
Weaknesses: The motivation of this work is not sufficiently clear. I can understand that solving for a Nash equilibrium may not be efficient, but I don't think proposing an NE solver via unbiased stochastic optimization will make it better.
It is unclear how this method is better than some existing NE solver such as Lemke–Howson algorithm.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I agree that there is a gap between the success of using SGD solving non-convex optimization problem and the failure of efficiently computing Nash equilibria. Why does this motivate the goal: "Can we solve for Nash equilibria via unbiased stochastic optimization"? To my understanding, solving a non-convex optimization problem is still very hard.
2. Solving a non-convex optimization problem may lead to a stationary point instead of the global minimum. Why is this proposed method better than using an existing NE solver, such as the Lemke–Howson algorithm, to ensure obtaining an NE?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theoretical work so there is no negative impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your encouraging statements. We have answered both your questions below. We hope you will consider increasing your score in light of these updates.
**Why Stochastic Non-Convex Opt? Isn’t that hard?**: You are correct. Solving a stochastic non-convex optimization problem is hard. However, it has been well studied and several algorithms exist with global convergence guarantees. We employ one such algorithm, BLiN, from the X-armed bandits family. In contrast, very few stochastic algorithms exist (to our knowledge, none with guarantees) from the game theory literature for directly approximating Nash equilibria of $n$-player, general-sum normal-form games. Therefore, while stochastic non-convex optimization is hard, reformulating the problem of approximating Nash in that framework opens up the possibility of applying a much larger class of algorithms than what is currently available.
**SGD Lacks Global Guarantees**: It is correct that stochastic gradient descent may converge to a local instead of a global minimum, and that is problematic. This is why we explore using a non-gradient method like BLiN, which enjoys global convergence guarantees. Regarding the classical Lemke-Howson (LH) algorithm, it is designed specifically for 2-player games. For 2+ player games, Govindan-Wilson (*gambit-gnm*) is its closest counterpart. We have run *gambit-gnm* as well as many other classical algorithms from the gambit library [1] (listed below) on the two Blotto games we examine in Figure 3. Only *gambit-enumpoly* and *gambit-enumpure* are able to return any NE within a 1 hour time limit (and only pure equilibria) and *gambit-enumpoly* times out on the larger 4-player Blotto game. In addition, all of these algorithms require storing the entire payoff tensor in memory. For larger games, this is prohibitive whereas our stochastic sample-based algorithm can still run in these cases.
*Proposed Edits*:
- We will add text to the introduction explaining that while non-convex optimization is hard, 1) much progress has been made 2) solving games is arguably harder, and 3) stochastic techniques have yet to be thoroughly explored in the game setting.
- We will add the above gambit results to the paper. We will also explain that we do not expect robust guarantees from running SGD, but believe it is worth investigating empirically.
[1] McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2016). Gambit: Software Tools for Game Theory, Version 16.0.1. http://www.gambit-project.org.
Algorithm descriptions: https://gambitproject.readthedocs.io/en/latest/tools.html
- gambit-enumpoly [73 sec 3-player, timeout 4-player]
- gambit-enumpure [72 sec 3-player, 45 sec 4-player]
- gambit-gnm
- gambit-ipa
- gambit-liap
- gambit-logit
- gambit-simpdiv
---
Rebuttal Comment 1.1:
Title: Continuing questions
Comment: Thanks for the clarification! My concerns are mostly addressed and I believe the construction of objective function that has many desired properties is an interesting contribution. I will keep my positive rating to support this work.
However, I may have other concerns after I read others' review, and I would like to confirm it a little bit. If all NE are interior of the probability simplex, then we can directly solve it by solving the minimum point of either (3) or (6) given in this paper. If some NE are not interior, we need to add a small entropy to the original utility $u_k(x)$.
1. It seems that all NE of this new game will be interior NE of the new game (and QRE of the original game). Then we solve one QRE by minimizing the corresponding objective function (7) of this new game. Is my understanding correct?
2. How do you connect the QRE solution and the original NE? Should there always be a non-vanishing gap?
3. How large is the first term in (13), $\frac{n}{\ln(1/p)}(W(1/e)+(\bar{m}-2)/e)$? Will this term vanish for sufficiently long training ($T\to \infty$)?
---
Reply to Comment 1.1.1:
Title: Answers to Follow-up Questions
Comment: Dear reviewer, thank you for your positive feedback and for your continued engagement. We have answered your questions below.
1. Yes, your understanding is correct. All NEs in the new game will lie in the interior (see Figure 1 for visual examples), and these will be QREs of the original game. We solve for these equilibria by minimizing (7) as you say.
2. Lemma 14 connects the QRE solution with the original NE. It shows that QREs well approximate NEs at low temperature. And yes, there is always a non-vanishing gap ($n\tau(W(1/e) + \frac{\bar{m} - 2}{e})$) that depends on the temperature $\tau$. In order to shrink this term, one must reduce the temperature $\tau$.
3. No, the first term in (13) does not vanish as $T \rightarrow \infty$, but it can be set arbitrarily small by choosing a low temperature (note $\tau = \frac{1}{\ln(1/p)}$ and the relation to Lemma 14). Note that decreasing the temperature increases the number of iterations required for the second term to vanish. Hence, it is left up to the user to decide how close an approximation they want. | Summary: This work studies the computation of Nash equilibria (NE) of normal-form games and proposes a new loss function: the (weighted) sum of the squared norms of the projections of each player gradient onto the tangent space of the unit simplex. The authors show that this loss function is a meaningful surrogate of exploitability when the game has an interior equilibrium. Then, the authors provide methods to efficiently construct unbiased estimators of the loss function via unbiased estimation of each player's gradient. To extend these results to handle games with only pure equilibria, the authors propose surrogate player utility functions via entropy regularization (with coefficient $\tau$, the "temperature") and show how the modified loss function (based on the modified game with surrogate player utility functions) captures the exploitability of the original game. Next, the authors derive gradient and Hessian expressions for the modified loss function. Leveraging a recent bandit optimization method BLiN, and assuming a sufficiently large temperature (which degrades the convergence rate), the authors provide a high-probability convergence guarantee for computing NE using this approach (loss function + BLiN). Experiments on SGD and BLiN show the effectiveness of the proposed approaches.
Strengths: - Novel observation of the connection between projection of player gradient to simplex tangent space and best response, which led to the loss function proposed in this paper.
- Extensive studies of the newly proposed loss function in terms of its gradient, Hessian, and other properties.
Weaknesses: Some technical details seem to require further clarification. See **Questions**.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Since BLiN is technically a zeroth order method (pulling an arm <==> sampling a function value), can you elaborate/repeat, somewhere around Theorem 1, what is the oracle passed into BLiN? I believe it should be a Monte-Carlo approximation through (6) but with the player gradients being the ones with temperatures (entropy regularization). In other words, please point out what needs to be computed in each step of BLiN.
- As stated in 229-231, if a NFG has a unique equilibrium which is also mixed, then $\mathcal{L}$ is strongly convex. Based on earlier results in this paper, are there other conditions that ensure strong or non-strong convexity of $\mathcal{L}$ (or $\mathcal{L}^\tau$)? It would be helpful to state them explicitly, as many stochastic optimization methods can exploit (strong) convexity.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: This is a methodological work that does not have immediate or potential negative societal impact. The limitations are on the technical contributions and are discussed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your encouraging statements. Your summary was spot on and your intuition regarding your first question is exactly correct. We hope you will consider increasing your score in light of these updates.
**BLiN Steps and Oracles**: We pass an oracle that is able to produce unbiased estimates of equation (7) in exactly the same way as equation (6) (just replace all gradients with entropy-regularized ones as you said). Every subsequent step of BLiN makes a call to (6) with an increasing batch size. That batch is split in half to generate estimates of each of the gradients in the squared norm separately.
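A self-contained toy illustration of why the batch is split in half (synthetic Gaussian "gradients" stand in for the real oracle; none of the names below come from the paper): averaging one batch and squaring its norm is biased upward by the estimator's variance, while the inner product of two independent half-batch means is unbiased for the squared norm of the true gradient:

```python
import random

random.seed(0)

true_g = [0.3, -0.2, 0.1]               # hypothetical true gradient
true_sq = sum(g * g for g in true_g)    # ||true_g||^2 = 0.14

def noisy_grad():
    # Hypothetical unbiased gradient oracle: truth plus unit Gaussian noise.
    return [g + random.gauss(0.0, 1.0) for g in true_g]

def mean_grad(batch):
    return [sum(g[i] for g in batch) / len(batch) for i in range(len(true_g))]

trials, naive, split = 20000, 0.0, 0.0
for _ in range(trials):
    batch = [noisy_grad() for _ in range(10)]
    m = mean_grad(batch)
    naive += sum(v * v for v in m) / trials               # biased upward
    m1, m2 = mean_grad(batch[:5]), mean_grad(batch[5:])
    split += sum(a * b for a, b in zip(m1, m2)) / trials  # unbiased
print(true_sq, naive, split)
```

In this toy run the naive estimate overshoots the true squared norm by roughly the variance of the batch mean, while the split estimate concentrates around the true value; this is the same reason our oracle evaluates the two gradients in the squared norm on disjoint halves of the batch.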
**Conditions for Strong Convexity**: Excellent question. We would love to be able to say more here, but at the very least, we cannot expect strong convexity if the game has multiple disconnected equilibrium points. You can see from our Figure 1 that even small 2-player games can induce non-convex landscapes. We are able to state conditions for strong-convexity in the zero temperature setting because it avoids a complicated analysis of the third-order tensor in the second term of the Hessian. We must study the non-zero temperature setting to understand conditions for strong-convexity in the partially-mixed and pure equilibria, but so far that analysis evades us.
*Proposed Edit*: As you suggested, we will add a statement similar to above that explains the BLiN procedure when applied to our setting. | Summary: This work studies solving Nash Equilibria (NE) by stochastic unbiased optimization. The main contribution is providing a new loss function based on the gradient norm of the utility function, and finding the NE by using standard stochastic optimization methods (like Lipschitz bandit algorithms and stochastic gradient descent (SGD)). The authors also carried out experiments on several games to show the scalability of their proposed methods.
Strengths: The presentation of this work is very clear. The experiment results are comprehensive and back up the main claims of this work. The results are also significant as they point out a new way to solve the NE problem in general.
Weaknesses: More remarks are supposed to be added to the main text. For example,
- What does 's' mean in the legend of Figure 3?
- In Table 1, why is the obstacle of the NI method 'max of random variable'? I did not see any max operator in the definition of the loss function of NI.
- In Table 1, for the unconstrained method, can the authors give one concrete example showing why it would 'lose the ability to sample from strategies when iterates are no longer proper distributions', as stated in line 113?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The same to the 'Weaknesses' section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This work aims to solve an open problem about the algorithmic game theory, thus it does not need to address the potential negative societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your support! Indeed, we see this as a completely novel and scalable approach to solve games and we hope others can build and improve on this work. We believe we can easily answer each of your questions as follows:
- Thank you for pointing out our omission of any description of “s”! It indicates the number of Monte Carlo samples used to estimate a gradient (i.e., the “batch size”, equiv. the number of joint actions sampled).
- In Table 1, NI is defined as a sum over $\epsilon_k$’s. If you look further down the page, you’ll see $\epsilon_k$ is defined via a “best-response” $BR_k$ which contains an $\arg\max$. More directly, we can equivalently define $\epsilon_k = \max_z u_k(z, x_{-k}) - u_k(\boldsymbol{x})$ which makes the appearance of the max operator obvious.
- Thank you for raising this. “No longer proper distributions” was probably poor word choice. We mean to say “no longer a vector of probabilities”. For example, it's clear how to sample 1 of 2 pure strategies from a probability vector that looks like [0.3, 0.7]. But how do you sample 1 of 2 pure strategies from a vector that looks like [-0.2, 1.6]? Do you softmax it first? Do you shift and normalize it? Any of these operations is nonlinear and would be problematic for the same reasons as the others in Table 1.
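A small self-contained illustration of this last point (the vectors [0.3, 0.7] and [-0.2, 1.6] are the same as in the text; the two fix-ups shown are the ones mentioned above):

```python
import math
import random

random.seed(0)

# A proper probability vector: sampling a pure strategy is unambiguous.
probs = [0.3, 0.7]
draws = [random.choices([0, 1], weights=probs)[0] for _ in range(10000)]
print(sum(draws) / len(draws))  # roughly 0.7

# An unconstrained iterate is not a probability vector, so "sampling" from
# it requires some nonlinear fix-up first -- and different fix-ups disagree.
bad = [-0.2, 1.6]

exp_vals = [math.exp(v) for v in bad]
softmaxed = [v / sum(exp_vals) for v in exp_vals]

shifted = [v - min(bad) for v in bad]
normalized = [v / sum(shifted) for v in shifted]

print(softmaxed)    # about [0.14, 0.86]
print(normalized)   # exactly [0.0, 1.0]
```

The two fix-ups produce different distributions from the same iterate, and either nonlinear map would break the unbiasedness arguments in exactly the same way as the other entries in Table 1.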
We hope you will consider increasing your score in light of these updates.
*Proposed Edits*:
- We will add a description of “s” to the figure caption in addition to describing the baselines in the legend.
- We will add a note to the table caption with the definition of $\epsilon_k$ so it is clear this term hides a max operator.
- We will add text to the appendix elaborating on the issue of sampling strategies when they do not lie on the simplex, including a concrete example like the one above.
---
Rebuttal Comment 1.1:
Title: Reply to the authors
Comment: Thanks for your reply. I will keep my score as it is.
Best, | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback! We really appreciate the interesting and constructive questions that were raised, and believe they will help us improve the exposition of the paper. In each of our responses, we answer your questions and propose edits to address them in the paper. Please let us know if the proposed edits satisfy your concerns and/or whether you have further suggestions.
Overall, it seems the reviewers found the proposed approach to be innovative and the presentation to be very clear with comprehensive experiments and analysis. We have pulled and grouped a few of the reviewers' own quotes below for convenience.
**Innovative**:
- **ksS3**: N/A
- **JGDw**: “The results are also significant as they point out a new way to solve the NE problem in general”
- **Deih**: “Novel observation of the connection between projection of player gradient to simplex tangent space and best response”
- **rzbR**: N/A
- **4TEh**: “promising approach, and has the potential to bring many new insights to equilibrium computation”
**Clear**:
- **ksS3**: presentation rating: excellent, “The paper is well-written”
- **JGDw**: “The presentation of this work is very clear”
- **Deih**: N/A
- **rzbR**: “Regarding the quality and clarity, this paper is sufficiently complete. It also provides clear backgrounds”
- **4TEh**: “Furthermore, the presentation and the writing are overall clear, and the authors accurately place their results into the existing literature”
**Comprehensive Experiments & Analysis**:
- **ksS3**: N/A
- **JGDw**: “The experiment results are comprehensive and back up the main claims of this work.”
- **Deih**: “Extensive studies of the newly proposed loss function in terms of its gradient, Hessian, and other properties.”
- **rzbR**: “enhance the understanding and applicability of loss functions…Finally, empirical results further verify the theoretical analysis.”
- **4TEh**: N/A | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a loss function (optimization problem) for normal-form games to estimate Nash equilibria which can be solved via unbiased stochastic optimization. They do this by relating their proposed loss function with exploitability. They also provide theoretical guarantees (under some technical conditions) of using bandit stochastic gradient algorithms to solve their proposed problem. They show the applicability of their method by conducting some numerical experiments.
Strengths: The paper tackles an important problem in the game theory of estimating Nash equilibria using optimization. It proposes a potentially scalable solution and provides theoretical guarantees for the same. The paper is well-written (albeit a bit notation-heavy) and the content is easy to follow.
Weaknesses: The authors propose an optimization problem that admits unbiased gradient estimators. However, the proposed problem is still non-convex, and it is not clear to me whether it can be solved efficiently with SGD given a potentially large number of saddle points. I understand the analogy to deep learning problems, but recent work ([1] and related papers) has shown that those problems carry some interesting structure. Similar properties are unknown (and are perhaps more difficult to establish) for the proposed function.
[1] Du, S., Lee, J., Li, H., Wang, L., & Zhai, X. (2019). Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning (pp. 1675-1685).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Could authors comment on the applicability of their methods on real problems in the context of my comments in the "Weaknesses" section?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are addressed adequately by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your intriguing question! We have answered below. We hope you will consider increasing your score in light of these updates.
**SGD Lacks Global Guarantees**: We agree that it remains unknown whether SGD and/or other gradient-based methods can cope with the potentially many saddle points and local minima of our proposed loss function. This is what motivated us to reproduce the experiment of [12] in Figure 2, which reveals an interesting story on “real” normal-form games. For some games (e.g., 2-player Sheriff), the loss has many local minima (every circle plotted @ $\alpha=0$ indicates a local minimum) which could be problematic for a gradient-based solver. However, for other games (e.g., 3-player Leduc poker), the loss only has a few suboptimal local minima, but has many saddle points. In that case, there exist gradient-based solvers that are specifically designed to circumvent saddle points [12]. Moreover, we analyze this property of our loss in Figure 3 in the Blotto game. In both Blotto games, SGD asymptotes to a positive (suboptimal) level of epsilon. We are able to analyze the Hessian at the end of 10k iterations and measure its spectrum to determine that SGD is not converging to a local minimum, but rather being temporarily slowed by a saddle point! We only pointed this out in the last sentence of Figure 3’s caption due to space constraints, but we should emphasize this.
Hence, to answer your question, we believe our preliminary analysis shows that while the loss is non-convex in general, it looks like SGD can in fact successfully minimize this loss in some cases. And in other cases, where saddle points are a problem, we should be able to employ more sophisticated gradient-based methods as in [1] capable of circumventing these saddle points.
Furthermore, note that the analysis of loss landscapes for deep networks requires considering different choices of architectures, nonlinearities, drop-out, and other network design options. In contrast, our loss is polynomial for any normal-form game. In that sense, we already have a much better understanding of the landscapes induced by our loss than researchers initially had of neural networks. We hope we can similarly make quick progress in the game setting.
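The diagnostic we used, inspecting the Hessian spectrum at the final iterate, can be illustrated on a textbook saddle (a toy sketch unrelated to our actual loss):

```python
import numpy as np

# f(x, y) = x^2 - y^2 has zero gradient at the origin, yet the origin is a
# saddle, not a local minimum: the Hessian has eigenvalues of both signs.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])  # Hessian of f at (0, 0)
eigs = np.linalg.eigvalsh(H)
is_local_min = bool(np.all(eigs > 0))                    # False here
is_saddle = bool(np.any(eigs > 0) and np.any(eigs < 0))  # True here
```

The same sign test on the spectrum is what distinguishes "SGD stuck at a local minimum" from "SGD temporarily slowed by a saddle point."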
*Proposed Edit*: We will try to incorporate parts of this discussion into the main body and a longer version into the appendix. | null | null | null | null | null | null |
Accelerating Value Iteration with Anchoring | Accept (poster) | Summary: New variants of value iterations based on anchoring
Complexity lower-bound that shows that their algorithms is optimal in a worst-case sense
Nice extensions for various important cases ($\gamma =1$, approximate VI, gauss-seidel, infinite state-action spaces).
Strengths: * Strong theorems, able to overcome the shortcomings of the analysis of some prior work that only prove acceleration for linear operators [1, 37]
* Well-written and easy to follow
Weaknesses: * A bit sketchy on the terminology: “rate” is not defined, some inaccurate statements (“O(1) rate for VI when $\gamma \approx 1$”), see my questions below
* No numerical comparisons with existing methods
* No analysis in the model-free setup (using sampling)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: p1, l20: “the optimal rate for the VI setup was not known” : isn’t it Theorem 3 in [37] ? This provides a lower-bound on the rate for both value iteration and value evaluation, and shows that VI achieves this rate.
p1, l26 - contributions: I am a bit puzzled by the formulations of “$O(1/k)$ rate for $\gamma \approx 1$” etc. While this becomes clear when the exact theorems are stated, it may be worth expanding here on the precise formulations of your results.
p2, l45, definition of Bellman operator and Bellman consistency: please mention that you overload the notation. Right now $T^\pi$ is both an operator of $R^{|S|}$ and $R^{|S|\times|A|}$.
Section 1.1: please define what is the rate of an algorithm. There may be inconsistencies since usually rate suggests that the convergence is linear and the rate $\rho$ means that error decreases as $\rho^k$ after k iterations; whereas I believe that the rate of p3, l83 $O(1/k)$ is for the error to decrease as $1/k$ after iterations. Please clarify.
Distinctions between anchoring and Nesterov’s acceleration: some recent works show that in some setup, anchoring and Nesterov’s acceleration are exactly equivalent - they are the same algorithm up to some reformulation [0]. Does this connection hold in your setup?
[0] Tran-Dinh, Q. (2022). The Connection Between Nesterov's Accelerated Methods and Halpern Fixed-Point Iterations. arXiv preprint arXiv:2203.04869.
p4, l122: why not simplify the expression of $\beta_k$ ? The denominator is just a partial geometric sum.
p4, lines 151-152: this is quite sketchy, since “rate” has not been defined (see my previous comment). The comments about “$O(1)$ rate for VI when $\gamma \approx 1$” do not make sense to me. I understand $\gamma \approx 1$ means that VI will converge slowly, but it is more convincing to compare the number of iterations before returning an $\epsilon$ optimal solution.
Choice of U0: to satisfy the inequalities of line 152, one can choose either $U_0 = 0$ or $U_0 = R / (1-\gamma)$, with $R$ an upper bound on the rewards. Would that be a good choice in practice though?
Theorem 1: is your choice of \beta_j the optimal choice?
Theorem 5: is your MDP instance different from the MDP instance from Theorem 3 in [37]? Please clarify.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations:
* No significant limitations apart from the absence of numerical simulations. It is not clear if the algorithms proposed in this paper perform well; beating vanilla value iteration is not really hard in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback.
Weakness
(i) (+Questions (iv) (vii))
Thank you for this point. By "rate", we meant the $\mathcal{O}$-dependency of the error. We will define our use of the term "rate" and we will revise the statement "$O(1)$ rate for VI when $\gamma\approx 1$" to make it more precise. (By $\gamma\approx 1$, we meant $\gamma> 1-1/k$.)
(ii) (+limitation) We performed several numerical experiments with Anc-VI and found that Anc-VI provided a practical acceleration only in some cases. (Our rate is a worst-case rate, so when the MDP is not a worst-case instance, there is no guarantee that the actual rate of Anc-VI is better than that of regular VI.) However, we were not yet able to find an adequate theoretical or heuristic explanation of when the anchor can be expected to provide a practical acceleration, so we chose not to present these experiments and leave this issue to future work.
(iii) In the fixed-point theory and minimax optimization literature, to the best of our knowledge, whether anchoring provides an acceleration in stochastic settings is still an open problem. The analysis of Anc-VI in a model-free setting is indeed an important direction that we plan to study in future work.
Questions
(i) Sorry for the confusion. We meant to state the optimal rate in terms of the Bellman error, not the distance to the optimal value function. We will clarify this in the revision.
(ii), (iii) We will clarify the overloaded notation and reflect your comments in the revision.
(v) Thank you for pointing out this reference.
This work does reveal a connection between anchor acceleration and Nesterov's acceleration, but we claim that the two mechanisms are different in the following sense. First, following the reformulation scheme of Theorem $3.2$ in this reference, A-VI in [1], a Nesterov-type VI, cannot be reformulated into an anchoring-type VI, since the reformulation covers only a restricted class of Nesterov-type algorithms. Second, there is a line of research on continuous-time models of acceleration, and [2] and [3] showed that anchor acceleration and Nesterov's acceleration have distinct ODE models. In our view, Nesterov's acceleration and anchor acceleration are connected but substantively distinct; studying the connections and differences between these two acceleration mechanisms is certainly an interesting direction.
(vi) Now that we think about it, the reviewer is probably correct that the expression is simpler when we carry out the partial geometric sum. Thank you for pointing this out. We will make that change in the revision.
(viii) Although we believe that $U_0=0$ is a natural choice, it is probably not a "good" choice for all MDPs. In our view, the requirement $U^0\le TU^0$ is not an onerous one, but it can be a cumbersome one. We feel that this requirement is probably an artifact of the analysis, and we will try to relax it in our future work.
(ix) We believe it is likely that there is a better choice of $\beta_j$ that slightly improves the constant. However, our choice $\beta_j$ is a relatively simple one that attains a rate that is optimal up to a constant factor of $4$, so it is the optimal choice in that sense. In terms of our analysis, our choice of $\beta_j$ is the coefficients that optimize our given analysis.
(x) The worst-case MDP in [4] and the worst-case MDP in our paper differ primarily in their rewards: $r(s_i, a_1)=\mathbf{1}_{\{i=1\}}$ and $r(s_i, a_1)=\mathbf{1}_{\{i=2\}}$, respectively. Less significantly, our worst-case MDP has one additional state, but the transition probabilities are essentially the same. Both MDPs have only one action. We will clarify this point in our revision.
[1] V. Goyal and J. Grand-Clément. A first-order approach to accelerated value iteration. Operations Research, 71(2):517–535, 2022.
[2] Su, W., Boyd, S., and Candès, E. J. A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights. Journal of Machine Learning Research, 17(153):1–43, 2016.
[3] J. J. Suh, J. Park, and E. K. Ryu. Continuous-time Analysis of Anchor Acceleration. arXiv:2304.00771, 2023.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their very detailed responses. I think that they did a great job of addressing my remarks and questions.
The variation considered by the authors incorporates an anchor acceleration mechanism, leading to what the authors call Anchored Value Iteration (Anc-VI). More specifically, rather than repeatedly applying the Bellman consistency/optimality operator $T$ to reach the fixed point (i.e., $x_{k+1} = T x_k$), Anc-VI obtains the next point as a convex combination between the initial point and the result of applying the operator to the previous point, namely $x_{k+1} = \beta_k x_0 + (1-\beta_k) T x_{k}$. Naturally, the sequence $\beta_k$ is a vanishing sequence.
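The update rule just described can be sketched on a toy MDP (a minimal illustrative implementation; the simple Halpern-style schedule $\beta_k = 1/(k+2)$ below is a stand-in, not the paper's actual choice of $\beta_k$):

```python
import numpy as np

def bellman_opt(P, r, gamma, V):
    # (T V)(s) = max_a [ r(s, a) + gamma * sum_s' P(s, a, s') V(s') ]
    return np.max(r + gamma * np.einsum('sat,t->sa', P, V), axis=1)

def anchored_vi(P, r, gamma, iters=2000):
    V0 = np.zeros(r.shape[0])  # the anchor x_0
    V = V0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)   # vanishing anchor weight (illustrative choice)
        V = beta * V0 + (1.0 - beta) * bellman_opt(P, r, gamma, V)
    return V

# Toy 2-state, 2-action MDP: action 0 stays put (reward 0),
# action 1 jumps to the other state (reward 1).
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[1, 0, 1] = 1.0  # stay
P[0, 1, 1] = P[1, 1, 0] = 1.0  # jump
r = np.array([[0.0, 1.0],
              [0.0, 1.0]])
gamma = 0.9                     # optimal value is 1 / (1 - gamma) = 10
V = anchored_vi(P, r, gamma)
bellman_error = np.max(np.abs(bellman_opt(P, r, gamma, V) - V))
```

Plain VI is recovered by setting the anchor weight to zero; the only change Anc-VI makes is the $\beta_k$-weighted pull toward $x_0$.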
After introducing the algorithm, the authors proceed with an in-depth analysis of the convergence rates of Anc-VI.
More specifically:
1) They study the convergence rate of the Bellman error (i.e., $||Tx_k - x_k||_{\infty}$) both for the consistency and the optimality operators. The authors show that Anc-VI converges at rate $\mathcal{O}\left( 1/k \right)$ for $\gamma \approx 1$ (while standard VI has rate $\mathcal{O}(1)$).
2) Under mild assumptions, the authors derive a lower bound that shows that the convergence rates of Anc-VI are tight up to a constant factor of 4.
3) The authors extend their study to the case in which $\gamma=1$, where VI may not converge to a fixed point even if one exists. Anc-VI, on the other hand, converges to some fixed point asymptotically, and the Bellman error shrinks to 0 at a linear rate.
4) Finally, the results are extended to the approximate value iteration setting and to the Gauss-Seidel variation of VI.
Strengths: VI is a grounding tool of many modern RL algorithms, and its analysis has gathered the community's attention for a long time. The authors thoroughly review existing works and properly contextualize their results within the field.
Applying the idea of anchoring to Value Iteration is, to the best of my knowledge, novel and leads to some surprising results, which have all been stated in summary above.
More specifically, the most relevant ones are:
1) Optimality of the accelerated rate of Anc-VI (i.e., upper bound matches lower bound up to constant factor).
2) Convergence in the undiscounted setting.
Given the fact that VI is the grounding core of many modern methods, I think these results are of interest to the RL community at NeurIPS. For this reason, I recommend acceptance.
On the clarity. Overall, the paper is a nice read, and not particularly hard to follow.
Weaknesses: Not much.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Do you have any intuitive explanation of why such stronger results can be obtained with anchoring? Why should we want to anchor our search close to the initial point? I think this point might add value for possible future development of RL algorithms that might take inspiration from this work.
2) Why should we anchor only on the initial value rather than on, e.g., a linear combination of the past value functions (memory requirements aside)?
3) Is there any existing variation of VI whose upper-bound matches the lower-bound?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss Limitations in Appendix H. Furthermore, the value of each theoretical result is properly discussed after presenting each result.
Potential negative societal impact. The paper deals with foundational research on the convergence rates of Value Iteration. I don't see a direct path to negative applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy to hear that the reviewer found our work interesting.
Questions
(i) Referring to prior works [1,2,3], we conjectured that the anchoring mechanism, which pulls the present iterates toward the anchoring point, provides stability and prevents a certain type of cycling behavior. However, we acknowledge that our intuitive understanding of the anchor mechanism is not very strong, and the question of 'why' (beyond the convergence proof) is an interesting direction of future work.
(ii) In fixed-point theory literature, there is a line of research that analyzes the convergence of the anchoring mechanism with arbitrary anchoring points for nonexpansive operators [Section 30.1, 4]. It seems reasonable to consider an anchor that moves, perhaps as a linear combination of past points, and exploring the effectiveness of such variants is an interesting direction.
(iii) As far as we know, there is no prior variant of VI that matched a lower bound. It is worth mentioning that [5] proposed A-VI, a variant of VI that uses Nesterov-type momentum, and showed that it exhibits an accelerated rate in terms of distance to the value function for the Bellman consistency operator. However, this result requires a "reversibility" assumption on the MDP.
[1] T. Yoon and E. K. Ryu. Accelerated algorithms for smooth convex-concave minimax problems with $O(1/k^2)$ rate on squared gradient norm. International Conference on Machine Learning, 2021.
[2] J. Park and E. K. Ryu. Exact optimal accelerated complexity for fixed-point iterations. International Conference on Machine Learning, 2022.
[3] J. J. Suh, J. Park, and E. K. Ryu. Continuous-time Analysis of Anchor Acceleration. arXiv:2304.00771, 2023.
[4] Bauschke, H. H. and Combettes, P. L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, second edition, 2017.
[5] V. Goyal and J. Grand-Clément. A first-order approach to accelerated value iteration. Operations Research, 71(2):517–535, 2022.
---
Rebuttal Comment 1.1:
Title: Ack
Comment: I thank the authors for their rebuttal, and for answering my questions. After reading all reviews, I confirm my score. | Summary: This paper considers an anchored version of Value Iteration and derives accelerated rates in terms of the Bellman error for both the Bellman consistency and optimality operators. Then, the work addresses the particular case of $\gamma =1$ with a $O(1/k)$ rate that VI fails to guarantee via the standard contraction argument. A complexity lower bound matching the upper-bound (up to a constant numerical factor) is then established. The paper further proposes an error propagation analysis of approximate anchored VI, extending the results of the exact case and a Gauss-Seidel version of the algorithm is also analyzed.
Strengths: - I find the fact that the Anc-VI algorithm is able to address the case $\gamma =1$ interesting and the anchoring mechanism seems to be well suited for this case unlike VI.
It is interesting to see that the anchoring idea, which was recently investigated in depth in optimization, is also fruitful in the DP setting. While the idea may seem natural since DP involves fixed-point iterations, the execution requires addressing several technical challenges that are more specific to the DP setting.
- The contributions of the paper are solid and somehow comprehensive with accelerated convergence rates, a lower bound and extensions to inexact and Gauss-Seidel variants.
- The anchoring mechanism can also be used for further extensions in DP/RL for the design of learning algorithms beyond the deterministic setting.
- The paper is well-written and the contributions are overall clearly stated.
Weaknesses: 1. Even if the Bellman error is a natural quantity as a performance measure for fixed-point iteration schemes, it would be nice if the paper could comment more, if possible, on the motivation for considering convergence guarantees in terms of this performance measure compared to the distance to the optimal value function. It is mentioned in l. 18 that the distance to the optimum is not computable and that the Bellman error can be monitored, which I find a valid, fair, and interesting point. However, as a matter of fact, the lower bound provided in [37, Theorem 3] (which is mentioned in the paper) in terms of distance to the optimal value function is actually achieved by Value Iteration. If we would like to approximate the optimal value function as fast as possible to find an optimal policy in a control task, for instance, why would we use anchored VI instead of VI? See also the related question 5 in the ‘Questions’ section below.
2. It is not discussed how to find a near-optimal policy using Anchored VI. I guess you can just output a policy that is greedy with respect to the output of the Anchored VI algorithm after a certain number of iterations as it is usually done with VI for which there are policy error bounds. Given that Section 1.1 mentions optimal policies, it would be nice to mention that the Anchored VI could be further enhanced to find near-optimal policies as an approximate planning algorithm.
3. The paper only discusses guarantees in terms of the Bellman error, comparing with the result for VI translated from the distance to the optimal value function $\|U^k - U^{\star}\|_{\infty}$ to the Bellman error. It is similarly possible to translate the Bellman error guarantee into a distance-to-optimality result. See question 5 below.
4. Case $\gamma = 1$: How can the (action-)state value functions be defined in that case? The definitions of Section 1.1 may not be relevant anymore. While the paper states that ‘a full treatment of undiscounted MDPs is beyond the scope of this paper’, I think at least adding references would be useful to give meaning to the problem and hint at the fact that fixed points can be guaranteed to exist (as currently assumed in the paper) under some technical assumptions. This would justify that the assumption is reasonable and allows some technicalities to be avoided.
5. I think the paper could comment on the advantages of the Gauss-Seidel Anc-VI and why this extension is considered. Section 6 does not discuss the motivation for considering such an algorithm. Is it for the possibility of performing asynchronous updates via its coordinate-wise update rule?
**Minor:**
- I find the terminology ‘linear operator’ for the Bellman (consistency) operator a little confusing, even if Puterman 2005 (p. 144, Eq. (6.1.7)) uses, e.g., the terminology ‘linear transformation’. A linear operator T would satisfy T(0) = 0 according to the standard mathematical definition of a linear operator, which is not the case for the Bellman consistency operator. Maybe ‘affine’ would be more appropriate.
- l. 137-141: I would say the result for the Bellman ‘consistency’ operator can also be relevant for policy evaluation which is useful as a subroutine for several algorithms in RL beyond value iteration.
- As a suggestion, the paper could add a figure for the hard MDP instance to ease the reading (see, e.g., Figure 1 in [37]). It also seems that the hard MDP instance is the same as the one in [37]; if so, it may be worth mentioning this.
- There is sometimes some redundancy in the proofs. For instance, the Hahn-Banach argument is used identically two times in l. 568 and l. 590. It is again used for Q-functions in l. 578 and l. 600. This also happens for some very similar inductions throughout the proofs.
- Lemma 15 in appendix: in the proof, ‘by definition W, there exist W’, I guess you mean there exists $k$.
- l. 513 and later: $\lim … \to $, I guess $\to$ can be replaced by $=$ here.
- Several grammatical articles are missing in writing, especially in the appendix: l. 225 (a complexity …), l. 507 (a nonexpansive), 510, 515, 520, 664, 700, 759, 760, 764 and several other places.
**Typos:**
l. 111, 113: ‘nonexpensive’;
l. 116: ‘operator’.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Besides the comments above, please find some questions below for clarifications:
1. l. 149 - 151: is the comparison of the rates with VI straightforward from the rates or does it require some algebraic manipulation? It does not seem to be immediate from the expression, please provide more details to clarify in the appendix if needed.
2. It is mentioned in the paper that the anchoring mechanism is ‘distinct from Nesterov’s acceleration’. The latter mechanism has inspired the work [37] which requires for instance some reversibility conditions as discussed in related works. A recent work (Tran-Dinh 2022) draws connections between Nesterov’s accelerated methods and Halpern fixed point iterations. Could you comment on this?
Quoc Tran-Dinh, 2022. The Connection Between Nesterov’s Accelerated Methods and Halpern Fixed-Point Iterations. https://arxiv.org/pdf/2203.04869.pdf
3. In the abstract, it is mentioned that ‘the optimal rate for the VI setup was not known’. Do you mean for the Bellman error (as the rest of the abstract and the paper state results for the Bellman error)? Please make this precise in that case; I found it a bit confusing at first read.
4. How do you come up with the specific way you set the parameter $\beta_t$ in l. 122? It seems to match the usual $\beta_t = 1/(t+1)$ used for anchoring/Halpern iterations in the case where $\gamma = 1$. Could you provide more intuition about this?
5. Results on the Bellman error can also be translated to the optimality measure $\|U^k - U^{\star}\|$ using the triangle inequality, at least for $0<\gamma<1$, to obtain $\|U^k - U^{\star}\| \leq \frac{1}{1-\gamma} \|U^k - T U^k\|$. How would this result compare to VI? Is it meaningful to include a comment along these lines for comprehensiveness, given that you discuss translating the result for VI to control the Bellman error? I understand, though, that the anchoring mechanism is better suited to control the Bellman error.
6. l. 290 - 292: How would the GS update rule even be defined in infinite dimensions, before even talking about the Hahn-Banach theorem for the analysis? Why would the Hahn-Banach theorem not be applicable?
7. Minor comment, l. 240: for the upper bound, it seems that one can obtain a constant equal to 2 using the following derivations:
$$\frac{(\gamma^{-1} - \gamma)(1+\gamma - \gamma^{k+1})}{(\gamma^{k+1})^{-1} - \gamma^{k+1}}
= \frac{\gamma^k (1-\gamma) (1+\gamma - \gamma^{k+1}) }{1-\gamma^{2(k+1)}}
= \gamma^k \frac{1}{\sum_{i=0}^{2k+1} \gamma^i} (1+\gamma - \gamma^{k+1})
\leq \frac{2 \gamma^k}{\sum_{i=0}^k \gamma^i}.$$
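As a side note on question 5 above, the bound quoted there follows in two lines from the triangle inequality and the $\gamma$-contractivity of $T$ (using $TU^{\star} = U^{\star}$):
$$\|U^k - U^{\star}\|_{\infty} \leq \|U^k - TU^k\|_{\infty} + \|TU^k - TU^{\star}\|_{\infty} \leq \|U^k - TU^k\|_{\infty} + \gamma \|U^k - U^{\star}\|_{\infty},$$
and rearranging gives $\|U^k - U^{\star}\|_{\infty} \leq \frac{1}{1-\gamma} \|U^k - TU^k\|_{\infty}$.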
**Proof related questions for clarifications:**
8. About the Hahn-Banach argument (for infinite state action space): Could you please provide clarifications regarding the following questions?
- Lemma 8, 9 in the appendix: What are $U$, $\tilde{U}$, $\bar{U}$ in the statements? If these are arbitrary (as they are instantiated later in the proofs of Lemma 10,11 for example), please precise the statement of the lemmas for clarity.
- The definition of the operator $\mathcal{P}$ (e.g., l. 600, and others) is not very clear to me. Do you define it only for multiples of $\bar{Q}$, where $\bar{Q}$ is defined in the lemma statement, or do you define it for every $Q$? Why do you introduce $c$ in the definition? I guess this is because you need homogeneity to invoke the Hahn-Banach theorem, but I would expect that this would be verified once you define the sublinear function for the application of the theorem. Also, it is not clear to me why the $\mathcal{P}$ as defined is a linear functional on $M$ (because of the inf) with $\|\mathcal{P}\| = 1$; which norm is this? Is it the operator norm (please define it in this case)? The notation $\bar{Q}$ is a little confusing. If $\mathcal{P}$ is defined by the inf (without $c$) for any $Q$, then you can show sublinearity (or rather ‘superlinearity’ with the inf) and homogeneity. The restriction to the span of $\bar{Q}$ would be a linear functional (as you mention), and then you can conclude, I guess. If this is the reasoning you conduct, please clarify. I suggest stating the Hahn-Banach theorem and its application, or at least clarifying what the sublinear function $p$ used is and what the linear functional dominated by $p$ is, especially the use of the notation $\bar{Q}$ for defining $\mathcal{P}$.
- Minor, l. 568, l. 590: ‘if action space is infinite’? Is it rather state space here? Same comment for other occurrences.
9. Case $\gamma =1$: l. 672, 673 in the proofs in appendix, it is stated that the $O(1/(k+1))$ rate is obtained after taking the infinity norm in Lemmas 4 and 10. Does the $O(1/(k+1))$ rate just follow from using $\beta_k = 1/(k+1)$ or is there any particular upper bounding manipulation? I think more clarifications would be useful for the reader.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does list the fact that the analysis of VI is not sufficient to understand modern deep RL (in appendix H in the end of the paper) and comments on the potential of future work in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the highly detailed and constructive feedback.
Weakness
(i) As an analogy from the optimization literature, the recent discovery of OGM [1] and OGM-G [2] demonstrates that considering a different performance measure leads to a different optimal algorithm. In this setting, we show that Anc-VI is an optimal algorithm when we consider optimally reducing the Bellman error, but we are not claiming that the Bellman error is a "better" measure than the usual distance to the optimal value function. Our point is that if we consider the Bellman error (which is also natural), Anc-VI is an optimal algorithm. However, one argument for the Bellman error is that we can obtain a meaningful rate and a point-convergence result when $\gamma\approx1$ or $\gamma=1$ only when we consider the Bellman error. We will incorporate this discussion into the revision.
(ii) Thank you for your comment. We will define a near-optimal policy and explain how it can be derived from Anc-VI in revision.
(iii) (+ Question $5$)
Yes, we can. If we translate the Bellman error of Anc-VI to the distance to the optimal value function using $\|U^k-U^{\star}\| \le \frac{1}{1-\gamma}\|TU^k-U^k\|$, Anc-VI shows the same $O(\gamma^k)$ convergence rate as VI, slower only by the constant factor
\begin{align*}
\gamma^{k}(1+\gamma)\frac{1+2\gamma-\gamma^{k+1}}{1-\gamma^{2k+2}} \ge \gamma^{k}(1+\gamma).
\end{align*}
We will add this argument in the revision.
(iv) We could not find prior works on technical conditions for the well-definedness of the value function when $\gamma=1$, but we could think of the following MDP instance: if an undiscounted MDP has bounded reward and the probability of transitioning to a terminal state is larger than some fixed positive constant (for every current state and action), then this undiscounted MDP has a finite value function, since $\sum^{\infty}_{n=0}nx^n$ converges for $|x|<1$. This is one condition for the well-definedness of the value function, and pursuing a systematic study of such conditions may be an interesting direction.
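One way to make this finiteness explicit (a sketch of ours, with $R$ a hypothetical bound on the reward magnitude and $p > 0$ a lower bound on the per-step termination probability):

$$
|V(s)| \;\le\; \sum_{n=0}^{\infty} R \,\Pr[\text{episode survives } n \text{ steps}] \;\le\; R \sum_{n=0}^{\infty} (1-p)^n \;=\; \frac{R}{p} \;<\; \infty.
$$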
(v) We were careful to comment on what we did not yet prove, but, yes, we think Gauss-Seidel Anc-VI is a stepping stone to analyzing an asynchronous coordinate-update version of Anc-VI, and we plan to study this direction in our future work. We will comment on this in the revision.
Minor and typos
(i) We agree with your point on the definition of a linear operator. Although we followed the convention of [3], we will update the terminology in our revision.
(ii) Thank you for this point on the usefulness of the results for the policy evaluation setup. Our intended point in lines 137-141 was that the Bellman optimality setting is more difficult due to the nonlinearity, and that we are able to provide an acceleration in both the consistency and optimality setups while prior works don't. We will adjust our wording.
(iii) The worst-case MDP in [4] and worst-case MDP in our paper are different primarily in their rewards: $r(s_i, a_1)=\mathbf{1}_{\{i=1\}}$
and $r(s_i, a_1)=\mathbf{1}_{\{i=2\}}$, respectively. Less significantly, our worst-case MDP has one additional state, but the transition probabilities are essentially the same. Both MDPs have only one action. We will clarify this point, and we will add a figure to illustrate the hard MDP instance.
(iv) As the Hahn-Banach argument is abstract and delicate, we tried to make the argument precise and explicit for every case. However, we will revise the proofs to reduce the redundancy.
(v, vi, vii, Typos) Thank you for the detailed corrections. We will correct the errors in our revision.
Questions
(i) The comparison in lines 149-151 comes from direct calculations, but we agree that the algebraic manipulation is not immediate. We will clarify this in the revision.
(ii) Thank you for pointing out this reference. This work does reveal a connection between anchor acceleration and Nesterov's acceleration, but we claim that the two mechanisms are different in the following sense. First, following the reformulation scheme of Theorem $3.2$ in this reference, A-VI in [4], a Nesterov-type VI, cannot be reformulated into an anchoring-type VI, since the reformulation only covers a restricted class of Nesterov-type algorithms. Second, there is a line of research on continuous-time models of acceleration, and [5] and [6] showed that anchor acceleration and Nesterov's acceleration have distinct ODE models. It is our view that Nesterov's acceleration and anchor acceleration have a connection but are substantively distinct, and studying the connections and differences between these two acceleration types is certainly an interesting direction.
(iii) Sorry for the confusion. Yes, we mean the optimal rate in terms of Bellman error. We will clarify this in revision.
(iv) [7] studied anchor acceleration for $\gamma$-contractive nonlinear operators with respect to the $\ell_2$ norm in Hilbert space. Inspired by this paper, we tested several candidates for $\beta_n$, namely $\frac{1}{\sum^n_{i=0} \gamma^{ki}}$ for $k=1,2,3$,
and we found that the choice $k=2$ gives an analytically simple and accelerated rate as in Theorems $1$ and $2$.
(vi) Sorry for the confusion. We can consider a block GS update by dividing the action space into finite disjoint sets, and `is not applicable' might not be the proper expression. We briefly mention that even if we apply the Hahn-Banach theorem in the block GS update setting, it does not lead to a valid convergence result, since the argument of Lemma $8$ is not valid for the product of multiple operators $\mathcal{P}_n \dots \mathcal{P}_1$ which appears in the proof of Lemma $19$. We will clarify this in the revision.
(vii) We believe there is a small mistake in your calculation, since
\begin{align*}
\frac{(\gamma^{-1}-\gamma)(1+2\gamma-\gamma^{k+1})}{(\gamma^{k+1})^{-1}-\gamma^{k+1}}
=\gamma^{k}\frac{(1-\gamma^2)(1+2\gamma-\gamma^{k+1})}{1-\gamma^{2k+2}}.
\end{align*}
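As a sanity check (our own sketch, not from the paper), the identity above can be verified numerically, together with the $\gamma^{k}(1+\gamma)$ upper bound for $\gamma \ge 1/2$ that appears elsewhere in the rebuttals:

```python
def lhs(g, k):
    # (g^{-1} - g)(1 + 2g - g^{k+1}) / (g^{-(k+1)} - g^{k+1})
    return (g**-1 - g) * (1 + 2 * g - g**(k + 1)) / (g**-(k + 1) - g**(k + 1))

def rhs(g, k):
    # g^k (1 - g^2)(1 + 2g - g^{k+1}) / (1 - g^{2k+2}); equals lhs after
    # multiplying numerator and denominator by g^{k+1}
    return g**k * (1 - g**2) * (1 + 2 * g - g**(k + 1)) / (1 - g**(2 * k + 2))
```
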
**Rebuttal continues in common response**
---
Rebuttal Comment 1.1:
Title: post rebuttal
Comment: I thank the authors for their rebuttal which addressed my questions in details, I maintain my positive score.
Regarding Bellman error vs distance to optimal value, I wanted to stress that while in optimization first-order stationarity is a natural quantity to look at in the non-convex setting, since the distance to the optimum is not even necessarily well-defined, in RL (and also in the present paper's setting) both the Bellman error and the distance to the optimal value are actually meaningful. This is the reason why I am somehow questioning the importance of finding an optimal algorithm for the Bellman error rather than for the distance to the optimal value, for which value iteration is known to be optimal. I acknowledge that the discussion about $\gamma = 1$ provides some motivation for the Bellman error though (if the distance to the optimal value function is not meaningful anymore). | Summary: The paper introduces an accelerated version of the Value Iteration (VI) algorithm, called Anc-VI, based on the anchoring mechanism. The proposed method achieves faster reduction of the Bellman error compared to standard VI, even when the discount factor is close to 1. Meanwhile, a complexity lower bound is also provided which matches the upper bound up to a constant factor of 4. Furthermore, this work also shows the benefits of the anchoring mechanism in approximate VI and Gauss-Seidel VI.
Strengths: 1. This work proposes an acceleration of VI by leveraging the anchoring mechanism
2. The proposed Anc-VI has a provably faster convergence rate compared with classic VI
3. This work provides acceleration rates for both the Bellman optimality operator and the consistency operator
Weaknesses: 1. The performance of the anchoring mechanism depends on the starting point $U_0$ and the discount factor. In order to obtain the fast acceleration, we need to set $\gamma\approx1$. However, one of the reasons we set the discount factor $\gamma < 1$ is for faster computation in practice. The authors need to clarify whether the anchoring mechanism can still benefit learning acceleration in the practical case; e.g., in Theorem 1, when $\gamma \to 0$, how does the upper bound show the acceleration? The same question applies to Theorem 2
2. In Apx-Anc-VI, when $U_0$ is far from optimal, according to Theorem 6, the first few iterations do not necessarily result in policy improvement (e.g., by setting $k=1$, $\gamma=1$). Thus, what can Theorem 6 tell us about the benefits of using the anchoring mechanism?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is it possible to derive any practical algorithm based on the theoretical findings in this work?
2. What is the computational complexity of the proposed method compared with VI?
3. What is the guidance on choosing $\gamma$ in practice in order to achieve faster convergence, e.g., letting $\gamma \approx 1$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors addressed the limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments.
Weakness
(i) (+Question (iii)) Although in some practical setups the discount factor $\gamma$ could be chosen freely, we assumed that the environment, the MDP, and $\gamma$ are given.
If $1/2 \le \gamma<1$, Anc-VI exhibits a provably faster convergence rate (First rates of Theorems 1 and 2) than the standard rate of VI, since
\begin{align*}
\frac{(\gamma^{-1}-\gamma)(1+2\gamma-\gamma^{k+1})}{(\gamma^{k+1})^{-1}-\gamma^{k+1}}
=\gamma^{k}\frac{(1-\gamma^2)(1+2\gamma-\gamma^{k+1})}{1-\gamma^{2k+2}}
\le \gamma^{k}(1+\gamma).
\end{align*}
If $0<\gamma<1/2$, these rates don't guarantee acceleration, but if $TU_0 \le U_0$ or $ U_0 \le TU_0$, the second rates of Theorems 1 and 2 are faster than the standard rate of VI for all $0 < \gamma <1$. (Both rates are decreasing functions of $\gamma$.)
(ii) For Apx-Anc-VI and Apx-VI, we didn't include the case $\gamma=1$ since the error diverges as the iteration number increases. If $1/2\le\gamma<1$, Apx-Anc-VI exhibits a provably faster convergence rate than the rate of Apx-VI by the same argument as in (i).
Questions.
(i) As VI serves as the foundational basis of practical RL algorithms such as fitted value iteration and temporal difference learning, we expect that Anc-VI will give insight for designing new practical algorithms or improving existing ones by incorporating the anchoring mechanism. This is certainly an interesting direction that we plan to pursue in our future work.
(ii) The computational complexity of Anc-VI and VI *per iteration* is basically the same; the operation of adding an anchor term is a vector-vector operation, and is therefore negligible compared to the evaluation of the Bellman operator, which often involves a matrix-vector operation.
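To make the per-iteration comparison concrete, here is a minimal tabular sketch (ours, not the authors' code) of the anchored update $V_k = \beta_k V_0 + (1-\beta_k) T V_{k-1}$. The sign convention $\beta_k = 1/\sum_{i=0}^{k} \gamma^{-2i}$ is our reading of the $k=2$ step-size candidate discussed above (it reduces to $\beta_k = 1/(k+1)$ when $\gamma = 1$, consistent with the $\gamma=1$ discussion); the MDP arrays are hypothetical. The only extra per-iteration work versus plain VI is the vector combination with the anchor $V_0$.

```python
import numpy as np

def bellman_opt(V, P, R, gamma):
    # Bellman optimality operator: (TV)(s) = max_a [R(s,a) + gamma * sum_s' P(s,a,s') V(s')]
    # P has shape (S, A, S), R has shape (S, A), V has shape (S,).
    return np.max(R + gamma * (P @ V), axis=1)

def anc_vi(P, R, gamma, V0, iters):
    # Anchored VI sketch: V_k = beta_k * V0 + (1 - beta_k) * T(V_{k-1}),
    # with beta_k = 1 / sum_{i=0}^k gamma^{-2i} (our reading of the beta_n
    # candidates above; beta_k = 1/(k+1) at gamma = 1).
    V = V0.copy()
    for k in range(1, iters + 1):
        beta = 1.0 / sum(gamma ** (-2 * i) for i in range(k + 1))
        V = beta * V0 + (1.0 - beta) * bellman_opt(V, P, R, gamma)  # one extra vector op vs. VI
    return V
```
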
---
Rebuttal Comment 1.1:
Title: Ack
Comment: I thank the authors' effort on the rebuttal.
I think the discussion on the discounting factor when introducing the theorems can be helpful for the general audiences to be more aware of the limitations of the theoretical results.
I will keep my original score. | Rebuttal 1:
Rebuttal: # Common Response
First of all, we thank the reviewers for their constructive and detailed feedback. We were excited to see that all the reviewers found our work valuable. Indeed, as reviewers e9mJ and tQc7 mentioned, the acceleration of Anc-VI is guaranteed by a ''strong theorem'' and ''leads to some surprising results'', and we expect the anchoring mechanism of Anc-VI to be applicable to more practical setups. Specifically, Reviewers GSn8 and e9mJ mentioned using Anc-VI in model-free setups and with asynchronous coordinates. We believe these are all interesting future directions.
# Rebuttal of reviewer GSn8 continued
Hahn-Banach argument
(i) $U, \tilde{U}, \bar{U}$ are arbitrary functions satisfying the conditions of the lemmas. We will clarify this in the revision.
(ii) $\bar{Q}$ is defined in the lemma, and we introduce $c$ for the homogeneity of $\mathcal{P}$ needed to invoke the Hahn-Banach theorem, as you pointed out. Then, $\mathcal{P}$ is a linear functional on $M$, where $M$ is the linear space spanned by $\bar{Q}$ with the $\|\cdot\|_{\infty}$-norm. About the norm of $\mathcal{P}$, we apologize for the unclear notation and typo. The norm should be the operator norm, and in line 601, '$\|\mathcal{P}\| = 1$' should be modified to '$\|\mathcal{P}\| \le 1$'. This is true since $\frac{ |c \inf_{(s',a') \in \mathcal{S} \times \mathcal{A}} \bar{Q}(s',a')|}{\|c\bar{Q}\|_{\infty}} \le 1 $. We will clarify this and reflect your suggestion in our revision.
(iii) If the action space is finite, a greedy policy satisfying $T^{\pi}V=T^{\star}V$ is well defined, and this directly leads to Lemmas $8$ and $9$ as we showed in the proofs. But if the action space is infinite, we cannot guarantee the existence of a greedy policy since a maximizer may not exist in the action space. In this case, we resolved the issue via the Hahn-Banach argument.
(iv) If we plug $\beta_k=\frac{1}{k+1}$ into Lemmas $4$ and $10$, we get the $O(1/(k+1))$ rate by simple calculation. We will clarify this in our revision.
[1] D. Kim and J. A. Fessler. Optimized first-order methods for smooth convex minimization. Mathematical Programming, 159(1–2):81–107, 2016.
[2] D. Kim and J. A. Fessler. Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions. Journal of Optimization Theory and Applications, 188(1):192–219, 2021
[3] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, 1994
[4] V. Goyal and J. Grand-Clément. A first-order approach to accelerated value iteration. Operations Research, 71(2):517–535, 2022.
[5] Su, W., Boyd, S., and Candès, E. J. A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights. Journal of Machine Learning Research, 17(153):1–43, 2016.
[6] J. J. Suh, J. Park, and E. K. Ryu. Continuous-time Analysis of Anchor Acceleration arXiv:2304.00771, 2023.
[7] J. Park and E. K. Ryu. Exact optimal accelerated complexity for fixed-point iterations. International Conference on Machine Learning, 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Contextual Bandits and Imitation Learning with Preference-Based Active Queries | Accept (poster) | Summary: This paper considers the learning problems of contextual bandits and imitation learning, where the learner lacks direct knowledge of the executed action's reward (feedback); instead, the learner is only able to ask the expert at each round to compare two actions.
[Interaction Protocol] The interaction between the learner and the environment proceeds in rounds, with $T$ being the total number of interactions. In each round $t$, the learner first receives the context $x_t$ (which is drawn adversarially), decides whether to send a request to the expert, and selects the actions (a pair of actions in the contextual bandit setting, as shown in Algorithm 1).
[Preference, Request and General Function Class] For the request associated with a pair of actions $(a_t,b_t)$, the feedback $y_t \in \{-1,+1\}$ indicates whether $a_t$ or $b_t$ is better, following an unknown preference function $f^\star$ (defined in Line 122). The learner has access to a general function class $\mathcal{F}$ with $f^\star \in \mathcal{F}$, as stated in Assumption 1.
[Goal] The performance of the learner is measured by (see Line 143 for the detailed definition): 1) the regret she suffered, that is, the difference between her total loss, and that of the optimal action; 2) the number of requests sent by the learner.
[Result of Contextual Bandits] With specific link function and online regression oracle defined in Assumption 2, Theorem 1 ensures that the regret is bounded by $\widetilde{\mathcal{O}}(\min\{ \sqrt{T}, \frac{d}{\Delta} \})$ where $d$ is some sort of complexity measurement (Eluder dimension) and $\Delta$ is the uniform gap (the minimal gap between the best and the second best action over all the context) defined in Assumption 3. The number of queries is also bounded as shown in Theorem 1. This result further matches the lower bound up to some factors (see Theorem 2 for more details).
[Imitation Result] Theorem 4 states the result and the details are deferred to Appendix, which I don't have much time to check carefully.
Strengths: The paper is clearly written, well-organized, and rigorous.
1. The proof is self-contained. I haven't observed any mistakes in the lemmas I skimmed.
2. The notations, together with their meanings, are well explained.
3. The definitions and assumptions are carefully separated.
Weaknesses: 1. Considering the dueling bandit problem, a special case of the problem instances studied, does the regret bound stated in Theorem 1 match the optimal regret bound for dueling bandits? I am not quite certain about the scale of $\mathrm{dim}_{E}(\mathcal{F}, \frac{\Delta}{2A^2})$ in this case.
2. The comparisons between Theorem 1 and the results of Saha and Krishnamurthy [2022] and Foster et al. [2020] are not very detailed. What is the regret bound of a naive conversion of AdaCB to the regret-minimization problem, and how does it relate to this work?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My questions are raised in the weaknesses section. I am willing to re-evaluate the score if the questions are answered properly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This work is pure theoretical, and does not have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We would like to address your concerns as follows.
**1. Comparison to regret bound for dueling bandits**
As established by prior works [1,2], for dueling bandits, the minimax regret rate is $\tilde\Theta(\sqrt{AT})$ and the instance-dependent regret rate is $\tilde\Theta\left(\frac{A}{\Delta}\right)$. Now we reduce our result (Theorem 1) to the dueling bandit setting and get
$$
\mathrm{Regret}_T
\leq \tilde{O}\left(\min\left\{\sqrt{AT},\frac{A^2 \mathrm{dim}_E(\mathcal{F},\frac{\Delta}{2A^2})}{\Delta}\right\}\right)
\leq \tilde{O}\left(\min\left\{\sqrt{AT},\frac{A^3}{\Delta}\right\}\right)
$$
where the second inequality holds since the eluder dimension is upper bounded by $A$ for dueling bandits. Consequently, we observe a gap of $A^2$ in the instance-dependent bound between our current rate and the optimal rate of dueling bandits. We believe that the improvement of this gap is an important future direction, and we will add this discussion to the next version of our paper.
**2. comparisons between Theorem 1 and the results of Saha and Krishnamurthy [2022], and Foster et al. [2020] / regret bound of the naive conversion of ADACB into regret minimization problem**
**Comparison to MinMaxCB [Saha and Krishnamurthy, 2022]**: In their setting, they assume that the preference-based feedback is sampled from $(f^\star(x,a,b)+1)/2$, which is a special case of our model (Example 2). Regarding their theoretical results, their regret upper bound is
$$
\mathrm{Regret}_T\leq O\left(\sqrt{AT\beta}\right)
$$
when translated into our notations. This is precisely identical to our worst-case regret upper bound (Theorem 1). However, we improve upon their results by having an additional instance-dependent regret bound. In other words, our algorithm can surpass theirs when the underlying contextual bandit problem exhibits good structure (e.g., small eluder dimension and large gap). Moreover, our algorithm is designed to actively make queries, and we established a guarantee on the number of queries made. In contrast, their algorithm simply queries every round.
**Comparison to AdaCB [Foster et al., 2020]**: Our work shares some similarities with AdaCB, especially in terms of the form of theoretical results, but it differs in three key aspects:
(1) They assume regular contextual bandits where the learner observes the reward directly, while we assume preference-based feedback. Notably, there is a reduction from "learning from reward signal" (their setting) to "learning from preference-based feedback" (our setting) when the reward is Bernoulli and the learner can choose two actions under the same state. To be specific, the reduction works as follows: given a regular contextual bandit instance and a contextual dueling bandit algorithm, each time the algorithm generates a comparison query between two actions, we sample the rewards of the two actions from the CB instance and return the action with the higher reward (and we return either action with equal probability in case of a tie). A detailed explanation of this reduction can be found in Appendix A.4. Thus, our setting can capture theirs under such conditions. As is clear from the reduction, the regret upper bound of our algorithm remains unchanged when we convert our setting to theirs, i.e., the regret upper bound remains as follows:
$$
\mathrm{Regret}_T \leq \tilde{O}\left(\min\left\{\sqrt{AT},\frac{A^2\mathrm{dim}_E}{\Delta}\right\}\right),
$$
while for their algorithm, their proposed regret upper bound is
$$
\mathrm{Regret}_T \leq \tilde{O}\left(\min\left\{\sqrt{AT},\frac{A \theta^{\mathrm{val}}}{\Delta}\right\}\right),
$$
where $\theta^{\mathrm{val}}$ denotes the *value function disagreement coefficient*. We note that comparing the eluder dimension and the value function disagreement coefficient is not straightforward since the disagreement coefficient is for stochastic settings while the eluder dimension is for adversarial settings. However, we may still observe a gap of $A$ in the instance-dependent bound. Improving upon this factor is an interesting future direction. Moreover, it is important to note that although a reduction from their setting to ours is already established in our work, the reverse direction (i.e., the reduction from ours to theirs) remains unclear and may require further investigation.
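The reduction in point (1) can be sketched as follows (our sketch, not the paper's code; `sample_reward` is a hypothetical handle to the Bernoulli reward distribution of the contextual bandit instance):

```python
import random

def preference_feedback(sample_reward, x, a, b, rng=random):
    # Reduction sketch from reward feedback to preference feedback:
    # sample a Bernoulli reward for each of the two queried actions under the
    # same context x, and report which action won; ties are broken uniformly.
    r_a = sample_reward(x, a)
    r_b = sample_reward(x, b)
    if r_a > r_b:
        return +1  # a preferred
    if r_b > r_a:
        return -1  # b preferred
    return rng.choice([+1, -1])
```
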
(2) They assume a stochastic setting where contexts are drawn i.i.d., but we assume the context is adversarially chosen. This difference leads to distinct complexity measures in the regret upper bounds: ours involves the eluder dimension, while theirs involves the disagreement coefficient. We are not sure if these two quantities are directly comparable, and we believe that extending our algorithm to the stochastic setting to get a dependence on the disagreement coefficient is an interesting future direction.
(3) It should also be noted that AdaCB does not aim to minimize query complexity, while we consider minimizing query complexity as an important goal.
We will incorporate the above discussion into the next version of our paper.
---
[1] Yue, Yisong, et al. "The k-armed dueling bandits problem." Journal of Computer and System Sciences 78.5 (2012): 1538-1556.
[2] Saha, Aadirupa, and Pierre Gaillard. "Versatile Dueling Bandits: Best-of-both-World Analyses for Online Learning from Preferences." *ICML 2022-39th International Conference on Machine Learning*. 2022.
---
Rebuttal Comment 1.1:
Title: Thank the authors for their response
Comment: I thank the authors for their response. I would like to keep the current score. | Summary: This paper studies the contextual bandit and imitation learning problem with preference-based feedback. The authors propose an oracle-based contextual bandit algorithm, which attains both worst-case and instance-dependent regret bounds. Besides, the algorithm has an instance-dependent guarantee on the querying numbers of the preference-based information. Furthermore, the proposed bandit algorithm is extended to the imitation learning setting with provable guarantees.
Strengths: - the proposed method has strong theoretical guarantees on the regret (both worst-case and instance-dependent bound) and query complexity. Although the oracle-based algorithm proposed shares similar techniques with MinMaxDB [Saha and Krishnamurthy, 2022] and AdaCB [Foster et al., 2020], the authors provide enough discussion to highlight the difference.
- lower bounds are provided, justifying that the upper bounds on regret and query complexity are tight up to logarithmic factors
- the paper is well-structured and well-written
Weaknesses: - about the practical implementation of the proposed method: one of my main concerns about the paper is from the practical side. Similar to oracle-based algorithms for the standard contextual bandit problem (e.g., SquareCB [Foster et al. 2022]), the proposed method is built on an online regression solver with regret guarantees. However, I'm not sure to what extent such an online regression solver can be obtained with the preference-based feedback model. For instance, as shown in Example 1, $f(x, a, b) = r(x,a)-r(x,b)$; the function $f(\cdot)$ is not convex even if $r:\mathcal{X}\times\mathcal{A}\rightarrow[0,1]$ is a convex function, so algorithms developed for online convex optimization are not applicable. I think it would be beneficial if the authors could provide some concrete examples (for example, when the reward function has a linear structure?) in which the online regression oracle is available.
- about the instance-dependent bound: the proposed instance-dependent regret bound has an $O(\Upsilon^2)$ dependence on the regret of the oracle, and the query complexity has an $O(\Upsilon^3)$ dependence. There still seems to be some room for improvement. In the finite function class case, AdaCB attains an $O(\log \vert\mathcal{F}\vert/\Delta)$ bound for the standard contextual bandit problem, but the result obtained in this paper implies an $O(\log^2 \vert\mathcal{F}\vert/\Delta)$ regret bound.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - could you provide concrete examples of the online regression oracle for the preference-based feedback model? It would be even better if the author could provide more detailed discussions on to which extent such an online regression solver can be established.
- could you provide more discussion on the tightness of the instance-dependent bound, especially on the dependence of $\Upsilon$?
- The expert policy $\pi_e$ is not formally defined. Does $\pi_e$ refer to the policy that maximizes the value function? I am confused by the claim "our algorithm not only competes with the expert policy but can also surpass it to some extent" in line 343. What is the formal definition of "surpass"? Do you mean the regret would go negative due to the term $Adv_T$? However, it is unclear to me when the negative term is large enough to cancel the $O(\min\{\sqrt{T}, A/\Delta\})$ term.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has discussed the limitations and potential future work in the conclusion. Another issue is that it imposes a realizability assumption on $f^\star$. It is unclear whether extending the analysis for the standard contextual bandit (Section 5 in [Foster et al., ICML 2020]) to the contextual dueling bandit setting is possible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We would like to address your concerns as follows.
**1. Practical implementation of the online regression solver / concrete examples of the online regression oracle**
As a concrete example, when the reward function $r:\mathcal{X}\times\mathcal{A}\rightarrow[0,1]$ is linear, the function $f$ is also linear. In this case, when the loss function is chosen to be convex w.r.t. $f$ (such as square loss and log loss), the online regression oracle can be implemented simply by online gradient descent. In addition, if $f$ is represented by a matrix (e.g., [4]) and cannot be decomposed into a difference of rewards, standard convex optimization methods still apply.
At a high level, many loss functions, including square loss and log loss, are convex functionals with respect to $f$. Let us assume that $f$ is further parameterized by $\theta$. Even if the loss is not convex w.r.t. $\theta$, we may still apply non-convex programming algorithms such as non-convex FTPL [1].
Moreover, many existing works have explored online regression in various scenarios. For instance, [2] and [3] have investigated online regression with square loss and general function classes. It would be interesting to integrate these works into our method.
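As a concrete illustration of the linear case above, a minimal online-gradient-descent regression oracle could look like the following sketch (ours, not the authors' implementation; the feature map `phi_diff = phi(x, a) - phi(x, b)` and the step size are hypothetical choices):

```python
import numpy as np

class OGDRegressionOracle:
    """Sketch of an online regression oracle for a linear preference model
    f_theta(x, a, b) = <theta, phi(x, a) - phi(x, b)>, trained by online
    gradient descent on the square loss, which is convex in theta."""

    def __init__(self, dim, lr=0.05):
        self.theta = np.zeros(dim)
        self.lr = lr

    def predict(self, phi_diff):
        # phi_diff = phi(x, a) - phi(x, b)
        return float(self.theta @ phi_diff)

    def update(self, phi_diff, y):
        # Square-loss gradient step: d/dtheta (f_theta - y)^2 = 2 (f_theta - y) phi_diff
        grad = 2.0 * (self.predict(phi_diff) - y) * phi_diff
        self.theta -= self.lr * grad
```
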
**2. Improvement on $\Upsilon$ for the instance-dependent bounds / Tightness of the instance-dependent bound**
As implied by our lower bound results (Theorem 2 on page 7 and Theorem 5 in the appendix), the proposed algorithm has a regret upper bound that is tight in the gap $\Delta$ and $T$ up to logarithmic factors for both regret and query complexity. Nevertheless, we are not sure whether the dependence on other factors is tight.
Specifically, we are also not sure about the tightness of the dependence on $\Upsilon$. In the current work, our focus is mainly on establishing bounds up to polynomial factors in the oracle's regret. However, we also emphasize that $\Upsilon$ is mild in most cases and usually scales like $O(\log T)$ or $O(\log|\mathcal{F}|)$ (see Examples 2 and 3). We believe that improving the algorithm to exhibit linear dependence on $\Upsilon$ is an interesting future research direction, and it seems to require non-trivial modifications to the algorithm.
**3. On the expert policy $\pi_e$**
The expert policy $\pi_e$ can be any Markovian policy that maps a state to a distribution over actions; it does not necessarily maximize the value function (i.e., the expert policy could be sub-optimal).
Regarding the notion of "surpassing" the expert policy, we can illustrate it by considering the average regret of imitation learning, which is defined as
$$
\mathrm{Regret}^{\mathrm{IL}}\_T := \frac{1}{T} \sum\_{t=1}^T \Big(V^{\pi\_e}\_0(x\_{t,0}) - V^{\pi\_t}\_0(x\_{t,0})\Big).
$$
Then, the regret upper bound in Theorem 4 can be translated into:
$$\mathrm{Regret}^{\mathrm{IL}}\_T\leq O\left(
H\sqrt{\frac{A\beta}{T}}
\right)
-\frac{\mathrm{Adv}\_T}{T}$$
where we have simplified it by ignoring the instance-dependent upper bound and logarithmic factors for clarity. Now, consider a case where $\max_a A^{\pi_e}_h(x, a) > \alpha_0>0$ for some constant $\alpha_0$ for all $x$ and $h$. This can happen when the expert policy is not optimal for every state. Consequently, we have $\mathrm{Adv}_T > \alpha_0 H T$. In this case, the aforementioned regret is further bounded by
$$\mathrm{Regret}^{\mathrm{IL}}\_T \leq O\left(
H\sqrt{\frac{A\beta}{T}}
\right)
-\alpha\_0 H$$
We note that, when $T\rightarrow\infty$, we have $\mathrm{Regret}^{\mathrm{IL}}_T\rightarrow-\alpha_0 H < 0$. This means that the best policy learned in $T$ rounds will eventually outperform (or surpass) the expert policy when $T$ is large enough.
We will add the above explanation to the next version to improve the clarity.
**4. realizable assumption for $f^\star$ / extending [Foster et al., ICML 2020] to the contextual dueling bandit**
It is a good point and an interesting question for future research. However, without the realizability assumption, we are not sure if our query complexity result still holds, although we expect that the worst-case regret bound will persist, suffering only an additive term related to the model-misspecification error. We will mention this in the next version.
---
[1] Agarwal, Naman, Alon Gonen, and Elad Hazan. "Learning in non-convex games with an optimization oracle." *Conference on Learning Theory*. PMLR, 2019.
[2] Rakhlin, Alexander, and Karthik Sridharan. "Online non-parametric regression." *Conference on Learning Theory*. PMLR, 2014.
[3] Rakhlin, Alexander, and Karthik Sridharan. "Online nonparametric regression with general loss functions." *arXiv preprint arXiv:1501.06598* (2015).
[4] Saha, Aadirupa, and Akshay Krishnamurthy. "Efficient and optimal algorithms for contextual dueling bandits under realizability." International Conference on Algorithmic Learning Theory. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I would like to keep the positive score for this paper. | Summary: The paper gives “best-of-both-worlds” results for an imitation-learning problem in contextual bandits and MDP settings. With small orthogonal changes to assumptions, the algorithms primarily improve over prior work by considering instance-optimal bounds both in regret and queries, and require only ordinal preference feedback rather than explicit rewards (similar to the “dueling bandits“ literature).
Strengths: - The paper is easy to read, the algorithms and notation are well-explained, and the results are appropriately contextualized in prior work.
- The examples given for the functions in the model are quite useful for grounding the problem in more concrete applications. Related work is discussed thoroughly.
- Conceptually, the model draws nice connections between contextual bandits and modern topics in finetuning models (e.g. LLMs) from preference feedback, where the emphasis on “instance-optimal” style results is particularly well-motivated.
Weaknesses: - While the application of techniques from online reinforcement learning to obtain the instance-optimal bounds in this setting is clever, it is unclear how much of this follows directly vs what technical innovation is required. It would be helpful to highlight the methodological contributions used.
- Given the applications discussed, it would be beneficial to give experimental results for preference finetuning (even in a toy setting) to demonstrate the importance of instance-optimality in practice.
- While the instance-optimal rates seem reasonable, it would be nice to include (partially) matching lower bounds for some results, or discuss barriers to obtaining such results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the rates on $d$ or $\Delta$ be shown to be asymptotically tight for either queries or regret?
- What does the notation $P_t[a_t, b_t]$ on line 146 refer to?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - Connections to prior RL work which makes use of eluder dimension could be discussed in greater detail.
- Some hyperlinks are broken in the PDF.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We would like to address your concerns as follows.
**1. Highlight of the methodological contributions used.**
We highlight some of the novelty and methodological contributions of the proposed algorithm below:
- Active learning via candidate arm set: while the concept of a candidate arm set is not new and was originally employed in the "active arm elimination" algorithm, it is important to highlight that the integration of the candidate arm set with the active learning condition ($Z_t=\mathbf{1}\\{|\mathcal{A}_t|>1\\}$) is new to the best of our knowledge.
- Best-of-both-worlds regret upper bound via well-designed query strategy: we carefully designed the query strategies for different situations (for $\lambda_t=0$ and $1$). Such a design leads to both worst-case upper bound and an instance-dependent upper bound that depends on the eluder dimension.
- The design of the estimated cumulative regret: the quantity $\sum_{s=1}^{t-1} Z_s w_s$ (Line 9, Algorithm 1) is an upper bound of regret that we have incurred up to round $t$, which, to the best of our knowledge, is a novel contribution in the context of adversarial bandits. This is beyond another design proposed in [1], which is limited to stochastic bandits.
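The quantities in the bullets above can be sketched in a few lines (a hypothetical simplification of our own, not the paper's full Algorithm 1; the function and argument names are illustrative):

```python
def query_state(candidate_arms, past_query_flags, past_widths, threshold):
    """Illustrative sketch of the active-learning condition and the
    estimated cumulative regret.

    Z_t = 1{|A_t| > 1}: a query is only considered while more than one
    arm survives in the candidate set. The running sum of Z_s * w_s
    upper-bounds the regret incurred so far; comparing it against a
    threshold switches between uniform exploration and an
    inverse-gap-weighted query strategy.
    """
    z_t = 1 if len(candidate_arms) > 1 else 0
    est_regret = sum(z * w for z, w in zip(past_query_flags, past_widths))
    explore_uniformly = est_regret <= threshold
    return z_t, est_regret, explore_uniformly
```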
**2. Experimental results for preference finetuning**
We also believe that the experimental results of the proposed algorithm are important. Since this work primarily focuses on theoretical aspects, we leave the empirical study to future work.
**3. Matching lower bounds / the asymptotical rates of $d$ and $\Delta$**
We established some lower-bound results (see Theorem 2 on page 7 and Theorem 5 in the appendix). In summary, these results demonstrate the following two key points:
(1) The worst-case regret is lower bounded by $\Omega(\sqrt{AT})$.
(2) Any algorithm achieving a regret upper bound of $O(\sqrt{AT})$ will inevitably have an instance-dependent regret lower bounded by $\Omega(A/\Delta)$ and a query complexity lower bounded by $\Omega(A/\Delta^2)$ and $\Omega(T)$.
Hence, the proposed regret upper bound exhibits a tight dependence on the gap $\Delta$ and $T$ up to logarithmic factors for both regret and query complexity. However, we are not sure if the dependence on $d$ is tight. Further improvement on either the upper bound or the lower bound is an interesting future direction.
**4. The meaning of $P_t[a_t,b_t]$ on line 146**
On line 146, we are making a comparison to [2], where the notation $P_t$ is introduced, representing the preference matrices in their paper. We will add this definition in the next version to enhance clarity.
**5. Connections to prior RL work which makes use of eluder dimension**
Thanks for pointing this out. We will incorporate a more comprehensive discussion on prior works which use the eluder dimension in the next version. Additionally, we highlight that our worst-case regret bound is independent of the eluder dimension, while most of the existing RL works that use the eluder dimension will have eluder dimension in their regret bound.
---
[1] Foster, Dylan, et al. "Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective." *Conference on Learning Theory*. PMLR, 2021.
[2] Saha, Aadirupa, and Akshay Krishnamurthy. "Efficient and optimal algorithms for contextual dueling bandits under realizability." *International Conference on Algorithmic Learning Theory*. PMLR, 2022. | Summary: This paper develops the provably efficient algorithms AURORA and AURORAE, which achieve the optimal regret bound in the contextual dueling bandit setting and in imitation learning, respectively, while minimizing query complexity. The key idea is that the algorithm only makes a query when it is very uncertain about the optimal action ($Z_t = \mathbf{1}\\{|A_t| > 1\\}$). The algorithm decides the sampling distribution of action pairs for a query by checking whether the estimated cumulative regret exceeds a carefully designed threshold. If it does not, the algorithm explores and samples action pairs from the uniform distribution. If it does, the algorithm uses a technique similar to inverse gap weighting to better balance exploration and exploitation. For the imitation setting with horizon H, the algorithm treats the MDP as a concatenation of H contextual bandits and runs AURORAE, which is a stack of multiple AURORA instances.
Strengths: This work is original and well-motivated. It is crucial to design an online learning algorithm that achieves optimal regret while using minimal query complexity. Although I did not get a chance to read the complete proofs in the supplementary material carefully, given the discussion of intuition, all technical results seem reasonable to me.
This paper is well presented and is a pleasure to read. An example for illustration follows every definition. All materials are well organized in a logical manner.
Weaknesses: I have several concerns regarding the proposed algorithms. First, at p. 5, l. 5, the computational complexity for the candidate arm set might be very large, even if F is assumed to be a d-dimensional linear class. The computational complexity might be $O(dT\log(T)|A|)$. Also, in reality, F might be very complex, which might even worsen the computational complexity. Can we use a simple function class F for approximation while still achieving a similar regret bound?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the review in weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors address their limitations of not having any experiments on real data or simulations. I believe the work will be much more convincing if the theoretical bounds are supported by experiment results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We would like to address your concerns as follows.
**The computational complexity for the candidate arm set**
As mentioned by the reviewer, if $\mathcal{F}$ is a $d$-dimensional linear class, the computational complexity will be $\tilde{O}(d T A)$. We believe this complexity is correct and acceptable, considering that even the well-established LinUCB algorithm [1] exhibits a computational complexity of at least $\tilde{O}(dTA)$ (noting that for LinUCB, the construction of the upper confidence bound at each round $t$ takes at least $\tilde{O}(dA)$ time).
With regard to the question "Can we use a simple function class F for approximation while still achieving a similar regret bound?", we apologize we do not fully understand it. Could you clarify what you meant by using a function class for approximation?
---
[1] Li, Lihong, et al. "A contextual-bandit approach to personalized news article recommendation." Proceedings of the 19th international conference on World wide web. 2010.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the response. Could you discuss more about how specific choices of the function class $\mathcal{F}$ affect the computational complexity and the regret bound?
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. Below we discuss more about the effect of the choice of function class $\mathcal{F}$ on the computational complexity and the regret bound.
**How function class affects the computational complexity**
We observe that the computational complexity of the proposed algorithm mainly depends on the computation of the candidate arm set (Algorithm 1, Line 5) and the width (Line 8).
When $\mathcal{F}$ is a $d$-dimensional linear class, the computational complexity is $\tilde{O}(d T A)$ since the version space exhibits an ellipsoid structure where both the candidate arm and the width can be solved in $\tilde{O}(d A)$ time. When $\mathcal{F}$ is tabular, it can be considered as a special case of linear class with one-hot encoding. In this case, we have $d=S\times A$, resulting in a computational complexity of $\tilde{O}(TSA^2)$.
For a more general convex function class $\mathcal{F}$, we can design an efficient algorithm based on a weighted regression oracle for $\mathcal{F}$. We first note that previous work [1] has proposed a method to efficiently compute the width. Now we propose the following method to compute the candidate arm set. We first note that an arm $a$ belongs to the candidate arm set at round $t$ if and only if
$$
\min\_{f\in\mathcal{F},\\,\xi\in\mathbb{R}^A}
\\; 1
\quad\text{s.t.}\quad
f(x,a,a')=\xi\_{a'}\\;
,\quad
\xi\_{a'} > 0
\quad(\forall a'\neq a)
\quad\text{and}\quad
\sum\_{s=1}^{t-1} Z\_s\left(f(x\_s,a\_s,b\_s)-f\_t(x\_s,a\_s,b\_s)\right)^2 \leq \beta
$$
is feasible. Here we introduce the slack variable $\xi$ so that the optimization part for $f$ can be simply reduced to a weighted regression oracle. Next, we convert the above into Lagrangian formulation and obtain
$$
\begin{aligned}
&\min\_{f\in\mathcal{F},\\,\xi\in\mathbb{R}^A}
\max\_{\alpha\in\mathbb{R}\_+^A,\gamma\in\mathbb{R}\_+^A,\lambda\in\mathbb{R}\_+}
\\; 1 + \sum\_{a'\neq a}\alpha\_{a'}\big(f(x,a,a')-\xi\_{a'}\big)^2 - \sum\_{a'\neq a}\gamma\_{a'} \xi\_{a'}
+\lambda\left(\sum\_{s=1}^{t-1} Z\_s\left(f(x\_s,a\_s,b\_s)-f\_t(x\_s,a\_s,b\_s)\right)^2 - \beta\right)
\\\\
=&
\max\_{\alpha\in\mathbb{R}\_+^A,\gamma\in\mathbb{R}\_+^A,\lambda\in\mathbb{R}\_+}
\min\_{f\in\mathcal{F},\\,\xi\in\mathbb{R}^A}
\\; 1 + \sum\_{a'\neq a}\alpha\_{a'}\big(f(x,a,a')-\xi\_{a'}\big)^2 - \sum\_{a'\neq a}\gamma\_{a'} \xi\_{a'}
+\lambda\left(\sum\_{s=1}^{t-1} Z\_s\left(f(x\_s,a\_s,b\_s)-f\_t(x\_s,a\_s,b\_s)\right)^2 - \beta\right)
\end{aligned}
$$
Here we note that we can swap the min and max since the objective is convex in the joint space of $f$ and $\xi$. Then, the inner minimization problem can be solved by updating $f$ via the regression oracle and updating $\xi$ via gradient descent; for the outer maximization problem, we can do projected gradient ascent.
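The outer maximization step mentioned above can be sketched as projected gradient ascent onto the nonnegative orthant. The following standalone toy is our own illustration (`grad_fn` is a hypothetical oracle returning the gradient of the dual objective at the current multipliers), not the full procedure:

```python
def projected_gradient_ascent(grad_fn, x0, lr=0.05, steps=200):
    """Projected gradient ascent over the nonnegative orthant.

    At each step we move in the gradient direction and project back onto
    {x : x >= 0}, the feasible set for dual variables such as
    (alpha, gamma, lambda) in the Lagrangian above.
    """
    x = list(x0)
    for _ in range(steps):
        g = grad_fn(x)
        # gradient step followed by coordinate-wise projection onto x >= 0
        x = [max(0.0, xi + lr * gi) for xi, gi in zip(x, g)]
    return x
```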
**How function class affects the regret bound**
The function class affects the regret bound in $\beta$ and the eluder dimension. We elaborate on them separately:
- $\beta$: for commonly used loss functions (see Examples 2 and 3), $\beta$ depends on the complexity of the class $\mathcal{F}$, as is standard in oracle-based bounds. However, this dependence typically only scales logarithmically with the size (or effective size) of $\mathcal{F}$ and is mild in many scenarios. E.g., for a finite class $\mathcal{F}$, $\beta$ depends on $\log |\mathcal{F}|$. Furthermore, when $\mathcal{F}$ is infinite, we can replace $|\mathcal{F}|$ by the covering number of $\mathcal{F}$ following standard techniques. To give some concrete examples of infinite classes, for the $d$-dimensional linear function class, $\beta$ will have a dependence of $O(d)$ (the effective complexity of $\mathcal{F}$), and for the tabular class, $\beta$ will have a dependence of $O(SA^2)$.
- Eluder dimension: it is a standard complexity measure of function classes. For linear function class $\mathcal{F}$, it is typically bounded by $d$; for tabular function class, it is bounded by $SA^2$. More examples can be found in existing works on eluder dimension such as [2].
We will add these explanations to the revised version.
---
[1] Foster, Dylan, et al. "Practical contextual bandits with regression oracles." International Conference on Machine Learning. PMLR, 2018.
[2] Russo, Daniel, and Benjamin Van Roy. "Eluder dimension and the sample complexity of optimistic exploration." Advances in Neural Information Processing Systems 26 (2013). | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Large Language Models as Commonsense Knowledge for Large-Scale Task Planning | Accept (poster) | Summary: The paper proposes to use large language models (LLMs) as a world model instead of a policy for task planning. Specifically, the LLM is used to approximate the state of the world and acts as a heuristic policy in Monte-Carlo Tree Search (MCTS). Experimental results demonstrate the effectiveness of the method, outperforming a finetuned small model and a LLM policy.
Strengths: 1. The paper innovatively proposes to use language model as world model instead of policy model for task planning. The idea is quite interesting and the motivation is convincing, i.e., this can lower the complexity of the problem and better utilize the encoded commonsense knowledge in language models.
2. The experimental results demonstrate the effectiveness of the proposed method.
Weaknesses: 1. It is better to include more baselines that employ special design for task planning, e.g., SayCan [1], Zero-Shot Planner [2], etc.
2. Only simple settings are considered in this work, i.e., object re-arranging task in a household environment. I would be interested in seeing if the method will still work when applied to more complex environments, which contain more diverse objects making it harder to predict world state.
[1] Ahn et al. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
[2] Huang et al. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: LLM-MCTS retrieves in-context examplars from a dataset. Does the baseline model also do this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Seems that I don't see a limitation section in the paper? Also, broader societal impacts are not included, which should typically be considered for a generative model like large language models. Please refer to the weakness section for improvement.
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback. We will improve and revise the paper according to your suggestions. Our reply to your question is enclosed below.
Q1:
> It is better to include more baselines that employ special design for task planning, e.g., SayCan [1], Zero-Shot Planner [2], etc.
A1: The key idea of SayCan is to learn physical-level affordances from raw observations when executing physical actions. In high-level task planning, the affordance is determined by the preconditions of the pre-defined actions. We are keen to compare with SayCan when combining our method with a physical-level action policy. Our GPT-3.5 policy baseline is essentially [2] with a one-shot example prompt and observation feedback, as we want to provide the same information for a fair comparison.
Q2:
> Only simple settings are considered in this work, i.e., object re-arranging task in a household environment. I would be interested in seeing if the method will still work when applied to more complex environments, which contain more diverse objects making it harder to predict world state.
A2: Object rearrangement is a representative embodied AI task [3-9] with many practical implications in everyday life, such as setting the table, tidying up the room, loading the dishwasher, and more. Thus, object rearrangement experiments are an interesting setting to investigate a fairly large set of planning capabilities required in embodied AI. We chose VirtualHome, as it is an established domain well used in prior work [2,13,14]. Our method is domain-agnostic and should be able to generalize to other common household domains, as LLMs have vast general knowledge that should be widely applicable [10-12]. We will explore those distinct datasets for further evaluation.
Q3:
> LLM-MCTS retrieves in-context exemplars from a dataset. Does the baseline model also do this?
A3: Yes, the baseline GPT-3.5 policy has the same mechanism as the heuristic policy in GPT3.5-MCTS. GPT2 policy, however, is fine-tuned by the entire training dataset using behavior cloning.
Q4:
> Seems that I don't see a limitation section in the paper? Also, broader societal impacts are not included, which should typically be considered for a generative model like large language models.
A4: Due to the page limit, we briefly introduce the limitation in the failure analysis and conclusion. We will consider the potential broader impact and add it to the manuscript during revision.
[1] Ahn et al. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
[2] Huang et al. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
[3] B. Dhruv et al., “Rearrangement: A Challenge for Embodied AI.” 2020.
[4] L. Weihs et al. "Visual room rearrangement." CVPR 2021.
[5] A. Szot et al. "Habitat 2.0: Training home assistants to rearrange their habitat." NeurIPS 2021.
[6] Y. Kant et al. "Housekeep: Tidying virtual households using commonsense reasoning." ECCV 2022.
[7] A. Khandelwal et al. "Simple but effective: Clip embeddings for embodied ai." CVPR 2022.
[8] E. Huang et al., "Large-scale multi-object rearrangement." ICRA 2019.
[9] A. Krontiris et al. "Dealing with Difficult Instances of Object Rearrangement." RSS 2015.
[10] S. Bubeck et al. "Sparks of artificial general intelligence: Early experiments with gpt-4." arXiv preprint arXiv:2303.12712 (2023).
[11] T. Silver et al. Generalized Planning in PDDL Domains with Pretrained Large Language Models. arXiv preprint 2023.
[12] B. Liu et al. "Llm+ p: Empowering large language models with optimal planning proficiency." arXiv preprint arXiv:2304.11477 (2023).
[13] I. Singh et al., “ProgPrompt: Generating Situated Robot Task Plans using Large Language Models”, ICRA 2023.
[14] S. Li et al., “Pre-trained language models for interactive decision-making,” Neurips 2022. | Summary: This paper proposed to leverage LLMs both as a (commonsense) world model and a heuristic policy within the MCTS search algorithm to tackle household planning tasks, namely object rearrangement. The main idea is that for each simulation phase in MCTS, the algorithm samples from the LLM to obtain the initial belief over states (of objects) and then uses the LLM as a heuristic policy to guide action selection and find promising trajectories.
[Some details of when and how often the LLM is used as a world model are missing from the main paper (see questions)]
The paper evaluated their approach using a subset of VirtualHome tasks, namely object rearrangement. They tested models on simple and compositional (rearranging multiple objects) tasks, as well as in-distribution and out-of-distribution settings. They use the `success rate’ of completing the tasks within 30 steps as the evaluation metric for comparison. They demonstrate significant improvements over baselines, including a variant of MCTS without a commonsense world model, a supervised GPT-2 model, and using GPT-3.5 only as the action policy. The improvements are larger for compositional and OOD setups, where their method benefits from MCTS's lookahead and the LLM's commonsense knowledge of the world.
Strengths: - Interesting/timely approach to leveraging LLM both as a world model and policy within the MCTS framework for the important task of planning
- Significant improvements over baselines
- Thorough and insightful ablation study to analyze the functioning of different components
Weaknesses: - The paper only tested on object rearrangement task with limited object relationships (on, inside). More complex and realistic tasks are left unexplored.
- The paper benefits from the rewriting of Sec 4.2 to add technical details. At its current state, it’s unclear how often the LLM is used as a world model (see questions).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1– From my understanding the LLM is only used once (per simulation) to get the initial state of the objects. How do you use that to estimate the value of selected actions (line 175)? Do you get different world states for different simulation iterations? is the commonsense model used at any other stage in the MCTS search algo? I think a more organized description of the process with a running example would be helpful.
2- During MCTS, to sample state from the belief, do you sample the position of all available objects or task-related objects?
3- Some notations are undefined in Algorithm 1: d, tau, d’, …
4- Why did the author limit themselves to only object rearrangement tasks with only a few object relationships (on, inside)? Did the author explore their method on other household or planning tasks?
5- For compositional tasks, are models provided with 1 few-shot compositional example or a simple example?
6- Line 291: how is the ground-truth reward function obtained? It's unclear how the ground-truth reward function is different from the one used in the proposed final model
7- An interesting observation is that sometimes GPT3.5 and MCTS outperform in the unseen apartment setup compared to the seen one (on some tasks). Do authors have any insight/speculation on this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, briefly in conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback. We will carefully consider and incorporate your comments and suggestions into our manuscript. The reply to the questions is enclosed below.
Q1:
> The paper only tested on object rearrangement tasks with limited object relationships (on, inside). More complex and realistic tasks are left unexplored.
A1: While our experiments focus on object rearrangement, we believe that it is appropriate as object rearrangement is a representative embodied AI task [1-7] with practical implications in everyday life, such as setting the table, tidying up the room, loading the dishwasher, and more. Thus, we believe that object rearrangement experiments reasonably support our claims in the paper.
Object relationships (on, inside) are only used to initialize the belief of the world. Using an imperfect but reasonable world model and object positioning is a trade-off between efficiency and accuracy. As the agent navigates and receives new observations, its beliefs of states and relationships of objects will be updated.
Q2:
> The paper benefits from the rewriting of Sec 4.2 to add technical details. At its current state, it’s unclear how often the LLM is used as a world model (see questions).
A2: We will revise Sec 4.2 according to your questions. Please see our responses below.
Q3:
> How do you use that to estimate the value of selected actions (line 175)? Do you get different world states for different simulation iterations?
A3: In MCTS, we sample a world for each simulation, and the sampled world can differ across simulations. In one simulation, the agent selects actions and samples observations according to the world sampled at the root until it reaches a leaf node of the tree for expansion and rollout. This results in a trajectory of the tree with a reward. Because the root samples a different world in different simulations, different trajectories arise in the same tree. We back up all the rewards from all the obtained trajectories to get the approximated Q-function at the root (Alg. 1, line 34 recursively updates to the root). The original paper on MCTS for POMDPs [11] may provide additional understanding.
> is the commonsense model used at any other stage in the MCTS search algo?
The LLM is used not only in sampling possible worlds but also as a search heuristic (in Alg1, line 29, LLM is used as heuristic policy: $\hat{\pi}(a|h)$).
Q4:
> do you sample the position of all available objects or task-related objects?
A4: We sample all available objects of the world.
Q5:
> Some notations are undefined in Algorithm 1: d, tau, d’, …
A5: $d$ and $d'$ denote the current depth of the tree, and $\mathcal{T}$ denotes the current tree. We will explain the notations in future revisions.
Q6:
> Why did the author limit themselves to only object rearrangement tasks with only a few object relationships (on, inside)? Did the author explore their method on other household or planning tasks?
A6: See our response A1.
Q7:
> For compositional tasks, are models provided with 1 few-shot compositional example or a simple example?
A7: The semantic similarity determines the selection of examples. If the cosine similarity between the embeddings of current instruction and instruction in the dataset is higher than the others, the instruction in the dataset and its corresponding trajectory are then selected as the example. The dataset contains examples of compositional tasks; thus, the compositional example will likely be selected if the current task is compositional.
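A minimal sketch (our own illustration, with hypothetical two-dimensional embeddings) of the cosine-similarity exemplar selection described above:

```python
import math

def retrieve_exemplar(query_embedding, dataset):
    """Return the (instruction, trajectory, embedding) triple whose
    embedding has the highest cosine similarity with the current
    instruction's embedding; it is then used as the in-context example."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm
    return max(dataset, key=lambda item: cosine(query_embedding, item[2]))
```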
Q8:
> Line 291: how is the ground-truth reward function obtained? It's unclear how the ground-truth reward function is different from the one used in the proposed final model
A8: We assume that the baseline UCT method has the ground-truth reward function. Otherwise, it will not be able to plan. For each task, we have the ground-truth goal, which we use to determine the reward in the UCT baseline. The proposed method uses LLM to interpret the natural language instructions into the goal state, and the goal state determines the reward function, i.e., there will be a positive reward if the goal is achieved.
Q9:
> An interesting observation is that sometimes GPT3.5 and MCTS outperform in the unseen apartment setup compared to the seen one (on some tasks). Do authors have any insight/speculation on this?
A9: Some unseen domains are not as large as the example domains. There might be some variance due to the uncertainty of sampling in both the GPT3.5 and MCTS. Occasionally, it has a higher success rate when the actual performance difference is small.
[1] D. Batra et al., "Rearrangement: A Challenge for Embodied AI." 2020.
[2] L. Weihs et al., "Visual Room Rearrangement." CVPR 2021.
[3] A. Szot et al., "Habitat 2.0: Training Home Assistants to Rearrange their Habitat." NeurIPS 2021.
[4] Y. Kant et al., "Housekeep: Tidying Virtual Households using Commonsense Reasoning." ECCV 2022.
[5] A. Khandelwal et al., "Simple but Effective: CLIP Embeddings for Embodied AI." CVPR 2022.
[6] E. Huang et al., "Large-Scale Multi-Object Rearrangement." ICRA 2019.
[7] A. Krontiris et al., "Dealing with Difficult Instances of Object Rearrangement." RSS 2015.
[8] S. Bubeck et al., "Sparks of Artificial General Intelligence: Early Experiments with GPT-4." 2023.
[9] T. Silver et al., "Generalized Planning in PDDL Domains with Pretrained Large Language Models." 2023.
[10] B. Liu et al., "LLM+P: Empowering Large Language Models with Optimal Planning Proficiency." 2023.
[11] D. Silver et al., "Monte-Carlo Planning in Large POMDPs." NeurIPS 2010.
[12] I. Singh et al., "ProgPrompt: Generating Situated Robot Task Plans using Large Language Models." ICRA 2023.
[13] S. Li et al., "Pre-trained Language Models for Interactive Decision-Making." NeurIPS 2022.
[14] W. Huang et al., "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents." ICML 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your detailed responses to clarify details. I believe including these in the paper will greatly improve the readability of the paper.
Despite some missing details, whose inclusion would greatly improve readability, this is a solid paper and I am leaning towards a positive recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for offering your valuable suggestions! They have helped us significantly in improving the manuscript. | Summary: The paper introduces a new methodology, _Monte Carlo planning with common sense knowledge_. The idea is to rely on LLMs to integrate common background knowledge into the Monte Carlo Tree Search algorithm, with application to language-instructed object rearrangement tasks.
Assuming access to a dataset of expert actions and observations, a list of all objects, containers, and surfaces appearing in the dataset is retrieved. Similar to [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf), an LLM is used to approximate the belief of the state, containing the list of objects and their relationships. For instance, the fridge is likely to be in the kitchen.
To derive a policy, an LLM is also used, building on the work of [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf). Relying on PUCT, the model takes as input the examples in the dataset, the goal description, the current observations and the history of actions and outputs the list of next actions to take.
The method developed is evaluated on VirtualHome, where the task is to rearrange objects in different apartments. The complexity of the task depends on the novelty of the apartment and the objects considered (observed or not in the training data). The benchmark includes four methodologies: UCT, two baselines based on [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf), and the proposed methodology. The results demonstrate the improvement induced by _Monte Carlo planning with commonsense knowledge_ independently of the complexity of the task. An ablation study demonstrates the improvement induced by the initial approximation of the belief of the state. Finally, an analysis of failures is conducted and demonstrates that most of them are linked to inadmissible actions output by the LLM and back-and-forth behavior.
Strengths: The work builds on [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf), relying on an LLM to approximate the belief of the state and to derive a policy. The novelty is to rely on MCTS and PUCT to derive the policy instead of DT. The approach is evaluated on a broader task than [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf), including compositional tasks that are supposed to be more complex to solve than simple tasks.
Finally, the ablation study provides insights on the added value of each block of the proposed methodology.
Weaknesses: The paper mostly relies on experiments and ideas from [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf) and would gain clarity if this were clearly stated. For instance, the use of LLMs to initialize the belief state was already employed in [S.Li 2022](https://arxiv.org/pdf/2202.01771.pdf).
Additionally, the code is not public, and the implementation of LLM_MCTS is not entirely clear from the pseudocode available in _Algorithm 1_. The paper would gain clarity by making the code public or available in supplementary materials.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The parameters $c_{puct}$ and $c_{uct}$ can have a huge impact on the performance of the algorithm. How did you select the parameters?
PUCT can struggle with large state spaces; did this motivate the choice to keep only 2 types of relationships instead of the 59 proposed in the task? Getting insights into the potential drop in performance induced by an increase in the size of the state space would be interesting.
200 expert trajectories are used in the training set. Having an idea of the impact of the number of trajectories on the performance of the methodology could be of great interest.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The two main limitations of the methodology are the non admissible outputs produced by the model and the back and forth behavior. The two were acknowledged by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your effort in reviewing our paper and providing feedback. Please see our responses below.
Q1:
> The paper mostly relies on experiments and ideas from S.Li 2022. and would gain clarity if clearly stated. For instance, using LLMs to initialize the belief state was already used in S.Li 2022.
A1: We believe that the reviewer may have missed out on our paper's main contribution and misunderstood the method of Li et al. [2]:
* While our research mainly contributes to using GPT-3.5 as a commonsense world model for **model-based search** with MCTS, Li et al. [2] use behavior cloning to finetune GPT-2 as a **model-free policy**. There are fundamental differences between the model-based and the model-free approaches, conceptually and algorithmically. In particular, the power of the model-based approach lies in its ability to compose elementary pieces of (commonsense) world knowledge through a reasoning/planning procedure (MCTS, in our case) and achieve compositional generalization. This is often one key weakness of the model-free approach.
* Li et al.'s approach [2], by its nature, would not employ belief states in planning, contradicting RKS5's assertion.
* Li et al. [2] only mentioned belief states in expert data collection (Appendix E.1.). It clearly states that they use the code of [1] for implementation, in which the belief of the initial state is a uniform distribution rather than initialized by LLM.
We hope that our response can clarify the misunderstanding.
Q2:
> To derive a policy, an LLM is also used, building on the work of S.Li 2022.
A2: Our heuristic policy uses LLM but is not built on Li et al. [2] as we do not finetune the LLM using behavior cloning.
Q3:
> The benchmark includes…, two baselines based on S.Li 2022 ...
A3: We only use one baseline from Li et al. [2], i.e., the finetuned GPT-2 policy trained with behavior cloning. The other baselines are UCT and the few-shot GPT3.5 policy adapted from [3]. Please take a look at [3]; it is substantially different from [2], as it does not fine-tune the LLM but uses prompts to conduct few-shot/zero-shot planning.
Q4:
> Additionally, the code is not public and the implementation of LLM_MCTS is not super clear form pseudo code available in Algorithm 1. The paper would gain clarity by making the code public or available in supplementary materials.
A4: We intend to make the code publicly available for the camera-ready version of the paper. In the meantime, we provide some implementation details here. Our dataset generation is adapted from the code in the paper [1]. The fine-tuned GPT2 and few-shot GPT3.5 policies are from [2] and [3]. Our UCT and LLM-MCTS implementations are adapted from [4].
Q5:
> The parameters $c_{puct}$ and $c_{uct}$ can have a huge impact on the performance of the algorithm. How did you select the parameters?
A5: We did limited tuning of the parameters within a range, initially testing UCT and LLM-MCTS in a small domain to verify correctness. After efficiently solving the problem in this small domain, we applied the method to a larger domain, increasing and continually tuning the parameters to balance exploration and exploitation. Lack of access to OpenAI GPT-3.5 and time constraints hindered more advanced tuning methods like Bayesian optimization, leaving potential room for performance improvement.
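For context on why $c_{puct}$ matters, it enters through the standard PUCT selection rule, which trades off the estimated value $Q(s,a)$ against the (here LLM-derived) prior $P(s,a)$. A minimal sketch with made-up node statistics:

```python
import math

def puct_select(children, c_puct):
    """Standard PUCT rule: pick the action maximizing
    Q(s, a) + c_puct * P(s, a) * sqrt(N(s)) / (1 + N(s, a)),
    where each child is a dict with mean value Q, prior P, visit count N."""
    total_visits = sum(ch["N"] for ch in children.values())

    def score(ch):
        return ch["Q"] + c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])

    return max(children, key=lambda a: score(children[a]))

# Hypothetical statistics for two candidate actions at one tree node.
children = {
    "open_fridge":  {"Q": 0.2, "P": 0.7, "N": 10},
    "walk_to_sink": {"Q": 0.5, "P": 0.3, "N": 2},
}
choice = puct_select(children, c_puct=1.0)
```

A larger $c_{puct}$ pushes the search toward the prior (exploration), while a smaller one favors the empirical value estimates (exploitation), which is why the parameter has to be tuned per domain.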
Q6:
> PUCT can struggle with large state spaces, did it motivate the choice to keep only 2 types of relationship instead of 59 proposed in the task. Getting insights on the potential drop in performance induced by an increase in the size of the state space would be interesting.
A6: A larger belief space could compromise performance. Thus, we did not consider all 59 relationships in belief initialization, making a trade-off between efficiency and accuracy. Investigating the potential drop in performance as more relationships are used would be interesting.
Q7:
> 200 expert trajectories are used in the training set. Having an idea of the impact of the number of trajectories on the performance of the methodology could be of great interest.
A7: This will certainly affect the results. However, 200 expert trajectories is already a significantly smaller dataset than that of Li et al. [2], and our performance is considerably better. We are happy to conduct additional experiments in the revisions to enrich our conclusions.
Q8:
> The two main limitations of the methodology are the non admissible outputs produced by the model and the back and forth behavior. The two were acknowledged by the authors.
A8: We clarify that these are two kinds of actions generated by the heuristic policy, caused by LLM policy errors that are similar to hallucinations. We admit that this affects the performance of the MCTS, but it is not LLM-MCTS itself that generates the back-and-forth behaviors and non-admissible outputs. We will revise this part of the paper to make it clear.
Our contribution is to improve decision-making. We leverage MCTS to base decisions on the LLM's world-model knowledge rather than fully relying on the LLM policy. The LLM's world model could also be incorrect, but it is updated with observations as the agent acts in the real world, making it more accurate over time. In addition, MCTS looks ahead multiple steps, allowing it to correct some of the errors in the set of actions proposed by the LLM during the search process. This is why we outperform the GPT3.5 policy.
[1] X. Puig et al, “Watch-and-help: A challenge for social perception and human-ai collaboration,” ICLR 2021.
[2] S. Li et al., “Pre-trained language models for interactive decision-making,” Neurips 2022.
[3] W. Huang et al. "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents." ICML 2022.
[4] Y. Jang et al., "Monte-Carlo Planning and Learning with Language Action Value Estimates." ICLR 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for providing a detailed answer.
- Adding the code in the additional material would have been helpful to better understand the way you implemented the proposed methodology. I think it is problematic not to release it during review.
- As acknowledged by the authors, the methodology doesn't scale to a large belief state. Additional experiments with more relationships would be important to highlight these limitations. As of now, I think this doesn't make it unsuitable for real world applications.
Given the two limitations, I don't consider currently increasing the score. Making the code available for review and adding experiments with larger belief states could change the current score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply.
> I think it is problematic not to release it during review.
As for the code, we have sent the code to the AC via a private message, which should be available after AC puts it up.
> As acknowledged by the authors, the methodology doesn't scale to a large belief state. Additional experiments with more relationships would be important to highlight these limitations.
* The reviewer misunderstood our point. We acknowledge that PUCT struggles with a large belief space (in Table 2, LLM-MCTS with a uniform state prior). However, the reviewer seems to have missed that we use the LLM to provide a prior that effectively narrows down the belief space for search in a large domain, which is reflected in our ablation study (Table 2). This is a key contribution of our work that is clearly claimed in the paper.
* We wish to clarify that the object relationships (on, inside) are only used to **initialize** the world's belief. Using an imperfect but reasonable initial state belief is a trade-off between efficiency and accuracy. In addition, as the agent navigates and receives new observations, its beliefs of states and relationships of objects will be updated. Our experimental result suggests that the trade-off is sufficient.
* Besides, this trade-off is also well applied in prior work [1], which Li et al. [2] also use in data collection, even though they do not use an LLM to initialize the belief.
> Given the two limitations, I don't consider currently increasing the score. Making the code available for review and adding experiments with larger belief states could change the current score.
If the additional experiment is your **primary concern** leading to a strong rejection, rather than just an interesting point, clearly stating it in your initial review would have been helpful and constructive. We cannot finish the experiment in such a short period near the discussion deadline; we will add the additional experiments in the revisions. Our initial plan to make the code available for the camera-ready version is also valid according to NeurIPS policy. If you find any part of our presentation of the technical details unclear, please state it.
We wish to know whether our response clarifies the other issues you raised.
[1] X. Puig et al, “Watch-and-help: A challenge for social perception and human-ai collaboration,” ICLR 2021.
[2] S. Li et al., “Pre-trained language models for interactive decision-making,” NeurIPS 2022. | Summary: This work demonstrates that LLMs can be used as commonsense models of the world and serve as the heuristic policy in search algorithms. Specifically, this paper uses Monte Carlo Tree Search to explore world states sampled from the output of LLMs, and a commonsense policy from LLMs effectively guides the search, which reduces the search complexity. Experimental results on daily planning tasks further verify the advantages over using LLMs solely as policies.
Strengths: a). Novel idea by incorporating commonsense knowledge from LLMs into search algorithms instead of LLMs solely as policies. This work also points out that doing the search with the help of LLMs as a model may be better and more efficient than using LLMs directly as a policy.
b). Comprehensive evaluation of the proposed methods including simple tasks, compositional tasks or in-distribution, out-of-distribution tasks.
c). Good insights that improvements might come from the MCTS’s look-ahead search and more explorations of other possible search directions. Overall this work might motivate more research utilizing LLM’s world knowledge for decision-making problems.
Weaknesses: a). This work argues that planning policy may suffer from hallucination issues of LLMs in the related work. However this work did not justify how their proposed methods can help relieve the issues. It can add some context to discuss this.
b). As discussed in the paper, the proposed method might face the efficiency problem.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (a) Just curious about why the baseline of UCT achieves zero success rate in Table 1 and can you explain more about this? Due to huge search space, UCT might fail to solve in a given time limit. But are there any improved search methods that can achieve better success rates, i.e., stronger baselines?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback. We will carefully revise our paper to incorporate your suggestions. The following are our responses to your questions.
Q1:
> This work argues that planning policy may suffer from hallucination issues of LLMs in the related work. However this work did not justify how their proposed methods can help relieve the issues. It can add some context to discuss this.
A1: We leverage MCTS for model-based reasoning, thus basing decisions on LLMs' world model knowledge rather than being fully dependent on the policy. Assuming that the world model is correct, MCTS then deduces the decision through search. In this sense, our approach improves decision-making over incorrect predictions that are similar to hallucinations.
The world model of LLM could also suffer from hallucination, but is updated with observations as the agent takes action in the real world, making it more accurate over time. In addition, MCTS looks ahead multiple steps, allowing it to correct some of the hallucinations in the set of actions proposed by the LLM during the search process.
We will revise the manuscript and add this discussion.
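As a simple sketch of the belief-update idea mentioned above (the fact-set representation is hypothetical, not the authors' code), candidate worlds sampled from the LLM can be filtered against the facts the agent has actually observed, so hallucinated worlds are progressively pruned:

```python
def update_belief(worlds, observation):
    """worlds: list of candidate world states (sets of facts) sampled
    from the LLM prior. Keep only the worlds consistent with the facts
    the agent has actually observed; fall back if all are pruned."""
    consistent = [w for w in worlds if observation <= w]
    return consistent if consistent else worlds

# Two hypothetical sampled worlds disagreeing about the milk's location.
worlds = [
    {("milk", "inside", "fridge"), ("plate", "on", "table")},
    {("milk", "inside", "cabinet"), ("plate", "on", "table")},
]
belief = update_belief(worlds, {("milk", "inside", "fridge")})
```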
Q2:
> As discussed in the paper, the proposed method might face the efficiency problem.
A2: Our study shows the trade-off between accuracy and efficiency and explores the feasibility of our approach. See the full answer in the **runtime performance** of our global responses.
Q3:
> Just curious about why the baseline of UCT achieves zero success rate in Table 1 and can you explain more about this? Due to the huge search space, UCT might fail to solve in a given time limit. But are there any improved search methods that can achieve better success rates, i.e., stronger baselines?
A3: This intractability stems from the large state and action spaces coupled with sparse rewards. These factors result in a wide and deep search tree, with the size growing exponentially and leading to intractable planning. Even though MCTS [4] is among the top online planners in POMDP, it suffers from these complexities, much like other SOTA model-based methods such as DESPOT [5], which we also tried. Due to time constraints, we are unable to include all methods in our formal experiments.
[1] C.-Y. Hsieh et al. "Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes." 2023.
[2] C. Liang et al. "Less is more: Task-aware layer-wise distillation for language model compression." ICML 2023.
[3] K. Shridhar et al. "Distilling reasoning capabilities into smaller language models." ACL 2023.
[4] D. Silver et al. "Monte-Carlo planning in large POMDPs." NeurIPS, 2010. Link: https://proceedings.neurips.cc/paper_files/paper/2010/file/edfbe1afcf9246bb0d40eb4d8027d90f-Paper.pdf
[5] A. Somani et al. "DESPOT: Online POMDP planning with regularization." NeurIPS 2013. Link: https://proceedings.neurips.cc/paper/2013/file/c2aee86157b4a40b78132f1e71a9e6f1-Paper.pdf | Rebuttal 1:
Rebuttal: # Global response
We thank all the reviewers' efforts invested in reviewing our work and providing valuable feedback. We summarize the main concerns raised by reviewers and our corresponding responses.
One reviewer (RKS5) states that
> The paper mostly relies on experiments and ideas from S.Li 2022. and would gain clarity if clearly stated. For instance, using LLMs to initialize the belief state was already used in S.Li 2022.
We believe this is a severe misunderstanding and mischaracterization of our method. Our research utilizes GPT-3.5 as a commonsense world model for **model-based search** with MCTS, while Li et al. [1] use behavior cloning to finetune GPT-2 as a **model-free policy**. There are fundamental differences between the model-based and the model-free approaches, conceptually and algorithmically. In particular, the power of the model-based approach lies in its ability to compose elementary pieces of (commonsense) world knowledge through a reasoning/planning procedure (MCTS, in our case) and achieve compositional generalization. This is often one key weakness of the model-free approach. We made this argument in the introduction as well as in Sect 5.3. We will try to further clarify this issue during the revision.
Further, unlike what the reviewer (RKS5) asserts, Li et al.'s approach [1], by its nature, does not track belief states.
In responding to the main weaknesses raised by the reviewers, we appreciate the opportunity to address the concerns:
* **The domain and task of the experiments**: Reviewers noted that our experiments are restricted to object rearrangement in the VirtualHome simulator. Object rearrangement is a representative embodied AI task [3-9] with many practical implications in everyday life, such as setting the table, tidying up the room, loading the dishwasher, and more. Thus, object rearrangement experiments are an interesting setting to investigate a fairly large set of planning capabilities required in embodied AI. We chose VirtualHome, as it is an established domain well used in prior work [1,2,18,19]. Our method is domain-agnostic and should be able to generalize to other common household domains, as LLMs have vast general knowledge that should be widely applicable [15-17]. We will explore those distinct datasets for further evaluation.
* **Runtime performance**: There is a trade-off between accuracy and computational efficiency. While our method requires multiple LLM calls, it provides substantially improved results (Table 1). One objective of our study is to identify this possibility and highlight the trade-off. There are also various ways to enhance runtime performance, such as using a smaller LLM like Llama [10,11] or distilling domain knowledge into a smaller model [12-14]. We are keen to explore these avenues in future research.
We thank the reviewers for their constructive criticism, and we hope our rebuttal has clarified the issues.
[1] S. Li et al., “Pre-trained language models for interactive decision-making,” NeurIPS 2022.
[2] X. Puig et al., “Watch-and-help: A challenge for social perception and human-ai collaboration,” ICLR 2021.
[3] D. Batra et al., “Rearrangement: A Challenge for Embodied AI.” 2020.
[4] L. Weihs et al. "Visual room rearrangement." CVPR 2021.
[5] A. Szot et al. "Habitat 2.0: Training home assistants to rearrange their habitat." NeurIPS 2021.
[6] Y. Kant et al. "Housekeep: Tidying virtual households using commonsense reasoning." ECCV 2022.
[7] A. Khandelwal et al. "Simple but effective: Clip embeddings for embodied ai." CVPR 2022.
[8] E. Huang et al., "Large-scale multi-object rearrangement." ICRA 2019.
[9] A. Krontiris et al. "Dealing with Difficult Instances of Object Rearrangement." RSS 2015.
[10] H. Touvron et al. "Llama: Open and efficient foundation language models." 2023.
[11] H. Touvron et al. "Llama 2: Open foundation and fine-tuned chat models." 2023.
[12] C.-Y. Hsieh et al. "Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes." 2023.
[13] C. Liang et al. "Less is more: Task-aware layer-wise distillation for language model compression." ICML 2023.
[14] K. Shridhar et al. "Distilling reasoning capabilities into smaller language models." ACL 2023.
[15] S. Bubeck et al. "Sparks of artificial general intelligence: Early experiments with gpt-4." 2023.
[16] T. Silver et al. Generalized Planning in PDDL Domains with Pretrained Large Language Models. 2023.
[17] B. Liu et al. "LLM+P: Empowering Large Language Models with Optimal Planning Proficiency." 2023.
[18] I. Singh et al., “ProgPrompt: Generating Situated Robot Task Plans using Large Language Models”, ICRA 2023.
[19] Huang et al. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents, ICML 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a technique to incorporate large language models' commonsense knowledge into Monte Carlo Tree Search to guide planning. It uses LLMs to obtain probabilities over the initial belief of the state, and as a heuristic policy to guide simulation. Evaluation on household object rearrangement tasks in VirtualHomes demonstrates that the method is empirically more effective than search algorithms that don’t incorporate LLMs (UCT), or just having LLMs generate a policy through direct prompting (GPT3.5 Policy). The paper also makes a theoretical argument in favor of using model-based methods rather than model-free methods with LLMs.
Strengths: 1. Novel approach to planning with LLMs that doesn’t just entail having the LLM directly generate a policy, but instead uses it as part of a search algorithm
2. Technique significantly outperforms both vanilla search algorithms and using LLMs directly to generate policies on household object rearrangement tasks
3. Ablation study and analysis give good insight into what helps, and where there is still room for improvement
Weaknesses: 1. The paper evaluated this technique in only one domain -- VirtualHome -- and it is unclear whether it can generalize to other embodied domains.
2. The main barrier to using this approach in practice is that it entails making multiple calls to GPT3.5 when computing each action, alongside having to do additional search on top -- making it less efficient than both vanilla search and vanilla LLM generation methods. Though computational expense was noted as a limitation in the conclusion, it may also be valuable to additionally report runtimes in the paper
3. The theoretical arguments for “knowledge of LLMs regarding states of world is more complete than their knowledge of policies for accomplishing daily tasks” (L58-59, Section 5.3), whereby description length of policies vs. world models is used to justify this claim, is not entirely convincing to me. First, it is unclear whether the argument the authors put forth that “learning about the world would likely require less training data than learning policies to do all tasks in the domain” (L62) applies to LLMs, which have been trained on an abundance of data and are not at all limited by data scarcity. This argument also makes assumptions about the way LLMs represent policies vs. models which seem not at all obvious to me — why is description length the best way to characterize LLM knowledge? LLMs are not trained to be efficient compressors, they are trained to imitate the training set.
* Is there a way to empirically validate this claim in particular?
---
Missing citations:
- https://aclanthology.org/2022.acl-long.120/
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Instead of performing similarity search, can you just constrain generation and/or prompt the model explicitly to choose amongst a limited set of available actions/object? This may help avoid translation errors.
2. Can you clarify the connection between LLM knowledge and description lengths?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations were mentioned in conclusion, though there is no explicit limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you very much for your valuable feedback. We are grateful for the many suggestions for improvement, which we will incorporate in the revised manuscript. We would like to further clarify the questions and concerns you raised.
Q1:
> The paper evaluated this technique in only one domain -- VirtualHome -- and it is unclear whether it can generalize to other embodied domains.
A1: We chose VirtualHome, as it is an established domain well used in prior work [1-3]. Our method is domain-agnostic and should be able to generalize to other common household domains, as LLMs have vast general knowledge that should be widely applicable [4-6]. We will explore those distinct datasets for further evaluation.
Q2:
> The main barrier to using this approach in practice is that it entails making multiple calls to GPT3.5 when computing each action, alongside having to do additional search on top -- making it less efficient than both vanilla search and vanilla LLM generation methods.
A2: There is a trade-off between accuracy and computational efficiency. While our method requires multiple LLM calls, it provides substantially improved results (Table 1). One objective of our study is to identify this possibility and highlight the trade-off. There are also various ways to enhance runtime performance, such as using a smaller LLM like Llama or distilling domain knowledge into a smaller model [7-9]. We are keen to explore these avenues in future research.
Q3:
> it may also be valuable to additionally report runtimes in the paper
A3: The actual runtime to finish MCTS and make one decision depends on the number of simulations and internet connection latency. We used 100 simulations during the tree search for GPT3.5-MCTS in our experiments, and it takes 1 to 2 minutes on average to make one decision. We will report the details in the paper and appendix in the final version.
Q4:
> Can you just constrain generation and/or prompt the model explicitly to choose amongst a limited set of available actions/object? This may help avoid translation errors.
A4: We have tried various methods to restrict action generation, including putting the list of pre-defined actions and the observed objects in the prompt. However, errors remain in the LLM’s policy, such as opening the fridge while still far away from it. This is likely because the LLM cannot guarantee effective use of all the information and preconditions in the prompt when making decisions.
Q5:
> Can you clarify the connection between LLM knowledge and description lengths?
A5: We respectfully disagree with the assumption that the LLM's training data is unlimited and can cover all domains and tasks. While the LLM is trained on vast datasets spanning many disciplines, it is not guaranteed to cover all situations in all possible tasks. An extreme but representative example is large-number multiplication. To learn the policy directly, we would need a dataset that covers all possible products of two numbers and memorizes the results; no such dataset can exist, as there are infinitely many cases. On the other hand, describing the world model (the digits 0-9 and the multiplication rule) is much simpler. Therefore, the LLM does face data scarcity issues when learning the policy for some tasks.
The description length, reflecting the space complexity of representing full knowledge, determines the amount of data needed to cover the situations in a task or domain. It is often used in learning theory to analyze sample complexity; see, e.g., Chapter 2 of Kearns and Vazirani (link: https://www.cis.upenn.edu/~mkearns/teaching/CIS625/KearnsVaziraniChapter2.pdf). Our analysis in the paper shows that the world model may require far less training data to learn than the policy in some situations, and our experiments show that this can happen in common household tasks. An LLM essentially imitates its dataset; thus, when the data is limited, the LLM's knowledge of the world model is likely more complete than its knowledge of the policy.
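As a toy, hedged illustration of this description-length gap (using the multiplication example above; the function names and counting scheme are ours, not from the paper), one can count how a lookup-table "policy" grows with the number of digits while the "world model" rule stays constant in size:

```python
def policy_table_size(n_digits):
    """Number of (a, b) -> a*b entries a lookup-table 'policy' would need
    to memorize to cover all products of two numbers with up to n_digits
    digits: grows as 10**(2*n_digits)."""
    count = 10 ** n_digits  # numbers 0 .. 10**n_digits - 1
    return count * count

def world_model_size():
    """The 'world model' is the multiplication rule itself: roughly the
    10x10 single-digit product table plus carry logic -- constant size."""
    return 10 * 10

# The gap between policy and world model widens exponentially with digits.
for n in (1, 2, 3, 4):
    print(n, policy_table_size(n), world_model_size())
```

Under this (admittedly crude) accounting, the policy's data requirement explodes while the world model's stays fixed, which is the intuition behind the sample-complexity argument.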
[1] I. Singh et al., “ProgPrompt: Generating Situated Robot Task Plans using Large Language Models”, ICRA 2023.
[2] S. Li et al., “Pre-trained language models for interactive decision-making,” Neurips 2022.
[3] Huang et al. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents, ICML 2022.
[4] T. Silver et al. Generalized Planning in PDDL Domains with Pretrained Large Language Models. 2023.
[5] B. Liu et al. "Llm+ p: Empowering large language models with optimal planning proficiency." 2023.
[6] S. Bubeck et al. "Sparks of artificial general intelligence: Early experiments with gpt-4." 2023.
[7] C.-Y. Hsieh et al. "Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes." 2023.
[8] C. Liang et al. "Less is more: Task-aware layer-wise distillation for language model compression." ICML 2023.
[9] K. Shridhar et al. "Distilling reasoning capabilities into smaller language models." ACL 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I believe the authors have adequately addressed my concerns regarding generalizability of the method and efficiency, and I believe the paper will be stronger with results on other household datasets and runtimes being reported.
I also appreciate the authors detailed response on their description length argument, which has been very clarifying. While I buy the argument for the example(s) presented in the paper and find the assertion that "world models are easier to learn than policies" makes intuitive sense, I believe it would still be good to ground the theoretical argument a little more in experiments, especially as it seems to be more of an illustrative example (which makes certain assumptions) than a formal proof. At the very least, it would be good to affirm that GPT3.5 truly has stronger priors over household room-object-container relations vs. household task policies. Overall, I believe this would be a very interesting argument to make with important implications for this LM + decision-making area of research, which is why it would be good to consolidate it even more with further evidence.
However, given that this is not the central contribution of the paper, I do not believe it necessarily affects my overall recommendation, which still leans positive.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! Your suggestions will greatly strengthen our work and improve our manuscript's presentation. We have added the missing citation in our related work as well.
---
Reply to Comment 1.1.2:
Comment: Thank you very much for your suggestion to ground the statement "LLM has more comprehensive knowledge about world modeling than policy" more firmly in experimental results. We conducted further experiments accordingly.
We ran experiments on planning air travel from a starting city to a destination city, as analyzed in our introduction. We used GPT-3.5 to directly generate flight paths between cities (the policy approach). We compared this to a GPT-3.5 model-based approach: GPT-3.5 predicts the neighboring cities connected by a direct flight, and these predictions feed into uniform-cost search (i.e., node expansion is replaced by GPT-3.5 acting as the world model).
We used data from the Kaggle World Cities database, selected 62 cities with populations exceeding 5 million in different countries, and used the Virtual Radar Server flight routes dataset as ground truth. In our tests, we sampled 200 city pairs and evaluated path accuracy by verifying that each direct flight exists. Paths were accepted if all flights were valid, even if they passed through cities beyond our selected 62 source and target cities.
The preliminary results suggest that 50.4% of the paths predicted by the GPT-3.5 policy are correct, while the GPT-3.5 world model + shortest-path algorithm achieves 63.5%. These findings provide further evidence that LLMs' knowledge is more comprehensive for world modeling than for policies, and will serve as a side finding supplementing our central argument.
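For illustration, this model-based pipeline can be sketched as follows; `llm_neighbors` is a hypothetical stand-in for the GPT-3.5 call that predicts direct-flight neighbors, mocked here with a toy route table (not the actual data used in the experiment):

```python
import heapq

def uniform_cost_search(start, goal, neighbors_fn):
    """Uniform-cost search where node expansion is delegated to
    neighbors_fn (in the experiment, a GPT-3.5 world model that
    predicts cities reachable by a direct flight)."""
    frontier = [(0, start, [start])]  # (cost, city, path so far)
    visited = set()
    while frontier:
        cost, city, path = heapq.heappop(frontier)
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for nxt in neighbors_fn(city):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return None  # no valid flight path found

# Toy stand-in for the LLM world model (hypothetical routes).
ROUTES = {
    "Lima": ["Mexico City", "Bogota"],
    "Mexico City": ["Tokyo", "Lima"],
    "Bogota": ["Madrid"],
    "Madrid": ["Tokyo"],
    "Tokyo": [],
}

def llm_neighbors(city):
    return ROUTES.get(city, [])
```

With unit edge costs, each expansion queries the (mocked) world model for direct flights, and the search returns a path with the fewest flights; the policy approach, by contrast, must generate the whole path in one shot.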
Due to time constraints, we could only complete these experiments within a very short period. We will add this result and further experiments to the appendix in the final manuscript if accepted. | null | null | null | null | null | null |
Action Inference by Maximising Evidence: Zero-Shot Imitation from Observation with World Models | Accept (poster) | Summary: This paper considers the setting where we have access to a dataset containing states and actions for pre-training and must then learn to solve a task given a new observation-only dataset from a downstream task. The key assumption here is that we have access to this action labelled dataset, and once we have it, we are better off training a world model than simply learning to label the new observation only trajectories with action labels (a la BCO and subsequently VPT). The proposed method is one of the first (as far as I know) to use forward dynamics models to implicitly model the action distribution and provides a nice alternative to IDM approaches for what is becoming an increasingly relevant problem setting. Given the relevance of the topic area, and the fact this exact thing has not been done before, I am voting for the paper to be accepted. My score would be increased if some additional experiments could be conducted, since these particular ones are relatively similar and also low impact in terms of their ambition.
Strengths: The strengths of this work are clear, it is a very relevant problem setting and this method is distinct vs. previous methods like BCO which rely on an IDM. In reality, this paper is essentially "Implicit action learning with forward dynamics models", and that has not been done before as far as I am aware. The method itself is fairly clearly presented, and the experiments are relatively clear with sufficient ablation studies. Finally, it is great to see limitations adequately discussed in the main body, which is surprisingly rare.
Weaknesses: Note that I have voted to accept, the following comments are not red flags but would likely improve the paper, and maybe make it possible to increase to a higher score.
1. The experiments are fairly mundane, and while scientific best practices appear to have been followed, there is a huge gap between what the authors claim to be working towards ("a single foundational world model") and what is actually shown (two DMC environments). It would be fantastic to see an example of this method at larger scale, even if the results are not state-of-the-art and there is only a single seed, for compute reasons. For example, this could be done using the dataset from VPT. The Minecraft images could be resized to make them smaller, and then it would be possible to use the DreamerV3 codebase (which was tested on MineCraft and runs on a single GPU) then see if it is possible to learn from the unlabelled MineCraft videos. If this works, it would drastically increase the impact of the paper, beyond being something mildly interesting for people who care about this specific topic, to something that catches people's eye across the field.
2. It seems slightly fishy to use a world-model-generated dataset to compare a world-model-based method and an IDM approach. It is possible there is some bias in the Dreamer or P2E data that makes it easier for a more similar approach to do well. Given how brittle many of these methods are, this could make a difference. Would it be possible to instead consider some open-source benchmarks such as V-D4RL (Lu et al.), which would make it "fair" across different approaches? For example, you could use the random or mixed datasets for the embodiment data and the expert datasets for the demonstrations.
3. All of these experiments are within the same single environment, with 3 different reward functions. There is no variability in terms of the dynamics or observation space. To take a tiny step towards a "foundational world model" could you consider some variation, such as using the distracting control suite or varied dynamics in the simulator? This is also available in VD4RL so could be used there too. My guess is the world model approach would actually do better and it would make the results more interesting.
4. This is a very active area of research so the citations seem light, for example:
- Edwards et al. "Imitating latent policies from observation". ICML 2019
- Seo et al. "Reinforcement learning with action-free pre-training from videos". ICML 2022
- Schmeckpeper et al. "Reinforcement learning with videos: Combining offline observations with interaction". CoRL 2020
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How was the IDM baseline tuned? For example, in the VPT paper they mention different architecture choices made a big difference for performance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed thoroughly in the main body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The experiments are fairly mundane. It would be fantastic to see an example of this method at larger scale. For example, this could be done using the dataset from VPT. The Minecraft images could be resized to make them smaller, and then it would be possible to use the DreamerV3 codebase (which was tested on MineCraft and runs on a single GPU) then see if it is possible to learn from the unlabelled MineCraft videos. If this works, it would drastically increase the impact of the paper, beyond being something mildly interesting for people who care about this specific topic, to something that catches people's eye across the field.
We agree that the MineCraft experiment would be a great way to improve the impact of this paper, and we have considered it as well. In fact, we contacted the authors of DreamerV3 about open-sourcing the pretrained world model immediately after their paper came out, but they declined, citing the extra effort required to make the model loadable with the open-sourced code base. If we trained the world model ourselves, as stated in the appendix of DreamerV3, it would take about 17 days. Moreover, the VPT labelled dataset is almost twice as large as the final replay buffer of the DreamerV3 agent, so training a model would likely take a month, not to mention hyperparameter tuning. As a small lab, we do not have the resources for an experiment at this scale. But we think this is a great suggestion, and we would like to find ways to try it in the future.
> It seems slightly fishy to use a world model generated dataset to compare a world model based method and an IDM approach. It is possible there is some bias in the Dreamer or P2E data that makes it easier for a more similar approach to do well. Given how brittle many of these methods are, this could make a difference. Would it be possible to instead consider some open-source benchmarks such as VD4RL (Lu et al), which would make it "fair" across different approaches? For example you could use the random or mixed datasets for the embodied and then expert for the demonstration.
That is a good suggestion. We conducted experiments with the V-D4RL main datasets. Please kindly check the experiment setup and results in the general response.
We can see that the performance of both BCO(0) and AIME is generally low, but AIME still outperforms BCO(0), which shows that AIME can also handle datasets generated by model-free methods. The low performance is due to the more constrained setup of the task, i.e., a smaller amount of embodiment data and less diversity. Except for cheetah-medium_replay, which has 400 trajectories, the other three datasets provided by V-D4RL have only 200 trajectories, far fewer than the 1000 trajectories in our paper. Moreover, Fig. 2 and Fig. 3 already show that random datasets do not help much with learning a model, and intuitively the medium_replay dataset is better but still does not contain enough information to solve the task.
We would like to point out that due to the limited time in the rebuttal period, these are only preliminary results.
> All of these experiments are within the same single environment, with 3 different reward functions. There is no variability in terms of the dynamics or observation space. To take a tiny step towards a "foundational world model" could you consider some variation, such as using the distracting control suite or varied dynamics in the simulator? This is also available in VD4RL so could be used there too. My guess is the world model approach would actually do better and it would make the results more interesting.
Thanks for your suggestion. We conducted additional experiments with the distracting datasets. Please kindly check the experiment setup and results in the general response.
As the results show, although AIME still outperforms the BCO(0) baseline, it is strongly affected by the distractions. This behaviour is expected, since the world model is trained with a reconstruction loss and cannot easily handle observations with distractions. A potential solution is to freeze only the dynamics part of the world model while allowing the encoders and decoders to also be fine-tuned in the second phase. Due to the limited time in the rebuttal period, however, we were unable to test this idea.
> This is a very active area of research so the citations seem light.
Thanks for your suggestion. We will cite these papers in the updated version.
> How was the IDM baseline tuned? For example, in the VPT paper they mention different architecture choices made a big difference for performance.
That is a good point. We apologize for omitting the implementation details of the BCO baseline from the appendix. To make a fair comparison in this paper, the IDM and policy are built using the same network architecture as the world model. In particular, for the visual setting, the IDM uses the same CNN as the world model, and temporal information is handled by stacking the representation from each frame. The action is then predicted as a TanhGaussian distribution with an MLP. We did a grid search over the width and depth of the MLP as well as the number of stacked frames, and did not find any increase in performance. We will add these details to Appendix B in the revised version.
We did not experiment with other architecture designs, such as using a heavy transformer model to handle longer context lengths. But from Appendix D.1 of the VPT paper, the most important design choice is using a 3D convolution before the CNN to process each frame. We think this is quite similar to stacking the images directly when the context length is short and there is no downstream transformer that needs tokens from each frame. Moreover, according to [1], stacking representations is actually a better design choice than stacking frames.
[1] Shang *et al.*, Reinforcement Learning with Latent Flow, NeurIPS 2021
---
Rebuttal Comment 1.1:
Title: Seems good!
Comment: Thank you for your response, it seems sensible. I would still like to see something a bit more ambitious for a super high score, but I think this paper should be accepted. The new experiments do provide incremental confidence given they are open source benchmarks vs. author generated settings. Given the other scores are borderline I am willing to support it with a 7. | Summary: This paper presents an imitation learning approach by first training a world model to predict next observations conditioned on (given) actions, and in a second phase training a policy that amortizes action inference by maximizing the likelihood of observations under a dataset of expert demonstrations. The authors compare their method to BCO(0) on the DMC walker and cheetah environments.
Strengths: The paper is well written, the idea is clearly explained and the benchmark seems well executed.
Weaknesses: The experimental results are hard to parse and have some anomalies (see my questions). Also the strongest claims "AIME outperforms the baselines by a large margin, which indicates the strong generalisability of a forward model over an inverse model. We also find that AIME makes substantially better use of exploratory data." are also mainly based on the Walker experiment, but are less outspoken on the Cheetah dataset.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: How is the performance measured? Is this accumulated reward (normalized to the expert performance)?
There are some questionable data points in the evaluations, for instance:
- AIME on Cheetah run->flip is able to learn from Visual, but not at all from LPOMDP/MDP, same for flip->run
- On Cheetah, flipb->run is impossible to learn, but running backwards is
- BCO(0)-MDP seems to outperform AIME-MDP on xxx->flipb
Do you have any insights on these results, whether this is due to the task at hand, the collected dataset, ...?
Moreover, on Cheetah the resulting agent reaches an overall performance of under 50%. Any idea why this performance gap is there?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately address the limitations in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The experimental results are hard to parse and have some anomalies (see my questions). Also the strongest claims "AIME outperforms the baselines by a large margin, which indicates the strong generalisability of a forward model over an inverse model. We also find that AIME makes substantially better use of exploratory data." are also mainly based on the Walker experiment, but are less outspoken on the Cheetah dataset.
We agree that we did not include enough analysis of the cheetah results. However, the conclusions in the paper generally apply to both embodiments; the improvement on the cheetah embodiment is simply smaller. We will clarify this and add more analysis of the cheetah experiments in the revised version.
> How is the performance measured? Is this accumulated reward (normalized to the expert performance)?
Thanks for bringing this up. Yes, you are right: the performances in all figures are the accumulated reward normalised to the expert performance. We will add the definition of performance to Sec. 4 in the revised version.
> AIME on Cheetah run->flip is able to learn from Visual, but not at all from LPOMDP/MDP, same for flip->run
We are also puzzled by this. However, we note that the RSSM (the Dreamer model) is mostly optimised for the visual setting, and we found it harder to tune for other settings. For the particular settings you mention, there is some similarity between the run and flip tasks, since both require subtle movements of the front leg; we conjecture that the change in visual observations can be more pronounced for these subtle movements than the change in proprioception.
> On flipb->run on Cheetah is impossible to learn, but running backwards is
This is due to the low performance of the run-backward expert. In Appendix D, we show the average return of the expert on each task. Run's expert obtains a return of $888.65$, while run backward's expert obtains only $218.50$. This makes imitating run backward the easiest task of all.
Moreover, we would like to point out that although run and run backward, or flip and flip backward, sound quite similar, the behaviours for solving them can be quite different due to the asymmetrical structure of the embodiment.
> BCO(0)-MDP seems to outperform AIME-MDP on xxx->flipb Do you have any insights on these results, whether this is due to the task at hand, the collected dataset, ...?
The flips are the hardest tasks in the experiments: for the majority of the time in the expert demonstrations, the cheetah is "flying" in the air, and the actions taken there are not relevant to solving the task. That leaves only a few actions in the sequence that are actually essential. We think this poses an additional challenge for AIME, since it needs to infer a sequence of actions, while BCO(0) operates on point estimates. For example, when the first few inferred actions fail to start the flip, the later actions will produce very noisy gradients, since none of them can explain the "flying".
> Moreover, on Cheetah the resulting agent reaches an overall performance of under 50%. Any idea why this performance gap is there?
This is a good question. We think this gap could result from the different control frequencies of the two embodiments. Although it is not well-stated in the literature, with an action repeat of 2, following other papers, the Walker runs at 20Hz while the Cheetah runs at 50Hz. Running at a higher frequency gives each action a smaller influence on the environment, and consequently on the observations. This smaller change makes both the IDM and the world model harder to train. There are also works [1, 2] using an action repeat of 4 for the Cheetah environment to get better results, which brings the control frequency to 25Hz, closer to that of the Walker.
[1] Hafner *et al.*, Learning Latent Dynamics for Planning from Pixels, ICML 2019
[2] Hansen *et al.*, Temporal Difference Learning for Model Predictive Control, ICML 2022
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses on my (and other reviewers') questions. I also appreciate the extra results provided. | Summary: This paper presents an algorithm named AIME to learn the world model and apply it to downstream tasks. In the first stage, AIME learns a world model from a dataset with actions to maximize the likelihood via EBLO. While in the second stage, given observation-only demonstrations, AIME optimizes the action sequence to imitate the expert’s behavior. The empirical result shows that AIME outperforms previous methods in DMC tasks.
Strengths: 1. Optimizing actions from observation-only trajectory via ELBO is somewhat novel compared to behavior cloning methods.
2. AIME performs better than previous methods even with local and image-based observations.
Weaknesses: 1. The major concern is the problem setting of AIME. In my view, a more general setting is one where the agent can only obtain state trajectories in the world-model learning stage but can obtain action-labeled data in the second stage. The agent then has much more data for training (e.g., human data without actions) and only needs a small amount of action-labeled data for fast adaptation. The authors should clarify the significance of the problem setting studied in this paper.
2. Since the world model applies action-labeled data in the first stage, the world model will be related to the policies that generate the dataset. I wonder if the model can handle datasets with a mixture of policies or low-quality policies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why the performance seems to have a very large variance in Figure 3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The major concern is the problem setting of AIME. In my view, more general setting the agent can only get state trajectory in the world-model learning stage while can obtain action-labeled data in the second stage. Then the agent has much more data in training (e.g., human data without actions) and only need a small amount of action-labeled data for fast adaptation. The authors should clarify the significance of the problem settings studied in this paper.
You raise a good point. We agree that having a large observation-only dataset, like YouTube videos, and then adapting with a small action-labelled dataset is a setting that has attracted a lot of attention in recent years. However, the reversed setting, as used in this paper, also has a lot of potential. By emphasising embodiment, we require the agent to have a dataset about its embodiment in the first place.
This setting fits robotics learning [1, 2] well, where people experiment with the same robot for years and naturally have access to a lot of embodiment data. In recent years, more and larger embodiment datasets have been collected [3] and open-sourced [4, 5]. It also holds for well-studied simulator benchmarks, where embodiment datasets are easily accessible through open-sourced benchmarks [6, 7, 8]. Besides, using embodiment data first is also true of lifelong learning in humans: consider infants, who first randomly explore what their bodies can do before they learn complex motor skills like walking. Last but not least, learning a model for a specific embodiment is much easier than learning a general model for all embodiments, which we do not even have in nature.
Thus, although the setting that starts from embodiment datasets is less popular at the moment, it still has great potential. We will update the paper to make the motivation for this setting clearer.
> Since the world model applies action-labeled data in the first stage, the world model will be related to the policies that generate the dataset. I wonder if the model can handle datasets with a mixture of policies or low-quality policies.
Yes, the dataset used to train the world model matters a lot, which is also one of the conclusions of the paper. In the experiment section, we train the models on multiple datasets. The random dataset represents the low-quality-policy setting, and all the other datasets are collected by a mixture of policies, since they stem from a replay buffer where each trajectory is collected by a different policy; the mix dataset additionally contains multiple behaviours. We observe that, in general, both AIME and BCO(0) do not work well on the low-quality random dataset, since the experience it contains hardly explains the observations seen in the demonstrations, i.e., it lacks information; performance generally improves on the p2e dataset, where the quality of exploration is higher. And AIME works well on mixtures of policies, as we showed in the other experiments.
> Why the performance seems to have a very large variance in Figure 3?
This is because the methods do well on some tasks but fail on others; averaging across them yields a very large variance. We agree it is not an ideal visualization, but we think it is still valuable to report the aggregated result. There are reward profile figures in Appendix E, where you can find more details of each test trajectory.
[1] Thrun and Mitchell, Lifelong robot learning, Robotics and Autonomous Systems, 1995
[2] Singh, Transfer of learning by composing solutions of elemental sequential tasks, Machine Learning, 1992
[3] Brohan *et al.*, RT-1: Robotics Transformer for Real-World Control at Scale, arXiv 2212.06817
[4] Dasari *et al.*, RoboNet: Large-Scale Multi-Robot Learning, CoRL 2019
[5] Ebert *et al.*, Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets, arXiv 2109.13396
[6] Fu *et al.*, D4RL: Datasets for Deep Data-Driven Reinforcement Learning, arXiv 2004.07219
[7] Qin *et al.*, NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning, NeurIPS 2022 Datasets and Benchmarks Track
[8] Baker *et al.*, Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos, arXiv 2206.11795
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the response. The limitation of this work with low-quality data needs more research in the future. I will keep the score unchanged.
---
Reply to Comment 1.1.1:
Title: Regarding low-quality data
Comment: We would like to thank you for your reply.
Regarding the comment on low-quality datasets, we would like to kindly clarify that they generally affect all algorithms applied to this problem. In the problem setting, the agent needs to use the knowledge in the embodiment dataset to infer the actions in the demonstration dataset. When the embodiment dataset is of low quality, it does not contain enough knowledge to infer the actions, which may make the problem infeasible. To make this concrete, the Walker-random dataset mainly contains trajectories of the Walker agent lying on the ground, and the Cheetah-random dataset mainly contains trajectories of the Cheetah agent swaying around the starting position. Given this knowledge, it is not possible to fully infer the actions of a complex behaviour like running without additional information. Thus, for any purely data-driven algorithm, the performance in each setup is upper-bounded by the quality of the embodiment dataset, and an algorithm that utilises the knowledge better can achieve results closer to that upper bound. In the experiment section, we show that AIME mostly outperforms BCO(0) when using a low-quality random dataset.
Please let us know if there are any further points of concern or clarification needed. | Summary: The paper proposes action inference by maximising evidence as a way for an MBRL to replicate most likely actions using appropriate world models. The algorithm has two phases: 1) Learn the world model based on a replay buffer, and 2) imitate the expert's behaviour by inferring the policy that maximizes the evidence of the demonstration under the policy and world model. Experimental results on the Walker and Cheetah embodiments of the DeepMind Control Suite demonstrate that this zero-shot imitation performance outperforms the current state-of-the-art approaches.
Strengths: - The paper addresses a major issue in deep reinforcement learning (DRL), namely sample inefficiency. By suggesting a method that can harness observational data, the authors propose a way to improve the sample efficiency of DRL agents.
- The paper introduces a new method, Action Inference by Maximising Evidence (AIME), for imitation learning. This method is designed to mimic the human ability to learn quickly from observation, which is an interesting contribution.
- AIME's two-phase learning process, involving the creation of a world model and then using it for imitation learning, is a unique approach. This process allows the agent to understand its own body and the likely actions that led to observed behaviors, which is a crucial aspect of learning.
- The method is capable of "zero-shot" learning, meaning it does not require further training for the world model or online interactions with the environment after being given the demonstration. This is a significant advantage in terms of efficiency and practicality.
- The authors provide empirical validation of their method on the Walker and Cheetah embodiments of the DeepMind Control Suite. They demonstrate that their method outperforms state-of-the-art baselines, which strengthens the credibility of their approach.
Weaknesses: - The paper validates the AIME method using the Walker and Cheetah embodiments of the DeepMind Control Suite, which are simulated environments. It's unclear how well the method would perform in real-world scenarios, where conditions can be more complex and unpredictable.
- The AIME method assumes that the agent can learn a perfect world model from its past experiences. This may not always be possible due to the complexity and unpredictability of many environments. The performance of the method could be affected if the world model is not accurate.
- The effectiveness of the AIME method may heavily depend on the quality of the observation-only demonstrations provided. If these demonstrations are not representative of the task at hand, or if they are of poor quality, the performance of the method could be significantly affected.
- The paper does not discuss the computational complexity of the AIME method or its scalability to larger and more complex tasks. If the method is computationally intensive, it may not be practical for use in real-time applications or on larger scales.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Does the world model training in Phase 1 have to converge before imitation learning can happen? Is this primarily for changes in the task? E.g., going from walking to hopping but with the same agent in the same environment?
- What if the space of actions change between phase 1 and phase 2? Will AIME still work? It doesn’t seem like it.
- Can action trajectories be learnt directly instead of one step action policies?
- How sensitive is the AIME method to the quality and diversity of the observation-only demonstrations provided? What happens if the demonstrations are not representative of the task at hand or are of poor quality?
- What is the computational complexity of the AIME method? How well does it scale to larger and more complex tasks?
- Does the AIME method handle situations where the world model learned from past experiences is not accurate or complete? What are the implications if the world model is imperfect?
- How well does the AIME method perform in terms of transfer learning? Can the world model learned in one context be effectively applied to another for the policy?
- How robust is the AIME method to changes in the environment or task? Can it adapt to new situations without requiring additional training?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors point out the following limitations that limit the scope of the current results:
- The AIME method performs well with visual input, but there is a significant performance gap when compared to the LPOMDP setting where low-dimensional signals are observed. This is attributed to the loss surface of the pixel reconstruction loss not being smooth enough to allow the gradient method to find an equally good solution.
- The study only considers the simplest setting where both the embodiment and sensor layout are fixed across tasks. This is a limitation as humans observe others in a third-person perspective and can imitate animals whose bodies are not similar to humans. Relaxing these assumptions could allow for transfer across different embodiments and even directly from human videos.
- For some tasks, even humans cannot achieve zero-shot imitation by only watching others. This could be due to the task's complexity or completely unfamiliar skills. Even with proper instruction, humans still need to practice in the environment and learn something new to solve some tasks. This suggests the need for an online learning phase as an extension to the AIME framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Does the world model training in Phase 1 have to converge before imitation learning can happen? Is this primarily for changes in the task? E.g., going from walking to hopping but with the same agent in the same environment?
No, it is not necessary to train the model until convergence to enable imitation; the imitation ability emerges gradually during training. From the results shown in Fig. 1 of the general response, the imitation ability is established in a very early phase of the training process, long before convergence.
> What if the space of actions change between phase 1 and phase 2? Will AIME still work? It doesn’t seem like it.
In general, we do not care about which action space was used to collect the demonstrations for phase 2, since we only need the observations. AIME will derive a policy in the original action space that the world model was trained on during phase 1. This is actually a benefit of our method, since it allows the agent and the demonstrator to have different action spaces. For example, if you want to demonstrate a task on a robot arm, the low-level action space is not intuitive for a human operator. In our setup, one can easily use another, more intuitive interface such as smartphones [1] to record the demonstration and let the agent imitate in the low-level action space. But if you are referring to completely changing the action space at deployment as well, which is unlikely to happen in real life, it would require an extra function to map actions from the old action space to the new one.
> Can action trajectories be learnt directly instead of one step action policies?
Yes, they can. As we show in the control derivation in Sec. 3.2, we can infer the action trajectories directly with a planning algorithm, but this needs to be done individually for each trajectory, which is why we introduce amortised inference to improve efficiency. One can also define the amortised inference model in many different forms, for example $\pi(a_{t:t+T}|s_t)$. We use the one-step action policy form to keep it simple and comparable to the baselines.
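As a toy illustration of the per-trajectory planning described above: infer an action sequence for one demonstration by minimising the mismatch between the model rollout and the observed states. This is only our own rough sketch; the linear "world model", function names, and hyper-parameters are assumptions, not the paper's implementation.

```python
import numpy as np

# Toy deterministic "world model": next_state = A @ state + B @ action.
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])

def rollout(s0, actions):
    """Roll the model forward from s0 under an action sequence."""
    s, traj = s0, [s0]
    for a in actions:
        s = A @ s + B @ a
        traj.append(s)
    return np.stack(traj)

def plan(s0, demo, steps=300, lr=0.05, eps=1e-5):
    """Infer the action sequence for ONE demonstration by
    finite-difference gradient descent on the state mismatch.
    This per-trajectory optimisation is exactly what amortised
    inference (a policy trained over all demonstrations) avoids."""
    acts = np.zeros((len(demo) - 1, 1))
    for _ in range(steps):
        base = np.sum((rollout(s0, acts) - demo) ** 2)
        grad = np.zeros_like(acts)
        for i in range(acts.size):
            pert = acts.copy()
            pert.flat[i] += eps
            grad.flat[i] = (np.sum((rollout(s0, pert) - demo) ** 2) - base) / eps
        acts -= lr * grad
    return acts

s0 = np.array([1.0, 0.0])
true_actions = np.array([[0.5], [-0.3]])
demo = rollout(s0, true_actions)   # observation-only demonstration
print(plan(s0, demo))              # close to the true actions [[0.5], [-0.3]]
```

An amortised model such as $\pi(a_t|s_t)$ would instead be trained once over all demonstrations, so a new trajectory needs only a forward pass.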
> How sensitive is the AIME method to the quality and diversity of the observation-only demonstrations provided? What happens if the demonstrations are not representative of the task at hand or are of poor quality?
We have to clarify that, in the setting of imitation, the task is defined by the demonstrations. We aim to replicate the behaviour of the demonstration rather than solving the original task that the demonstrator is trying to solve. This can be viewed as an alternative way of defining the task, i.e. by showing how the task is done, rather than defining a reward function.
About the diversity of the demonstrations, we would like to point you to Fig. 4 of the paper, where we limited the number of demonstrations, which also limited the diversity. We can see this indeed has an effect on the performance, but AIME is less sensitive to this than the BCO(0) baseline.
> What is the computational complexity of the AIME method? How well does it scale to larger and more complex tasks?
For the training complexity, in Appendix A we show that phase 1 requires 10-20 hours while phase 2 requires 5-10 hours on an old 1080 Ti GPU. For solving multiple tasks, you only need to run phase 1 once and then run phase 2 multiple times for the different tasks. Moreover, the 5-10 hours of training time is for 500 epochs, but as Fig. 8 of Appendix E shows, AIME normally does not need that long to converge, so it requires even less compute.
For the inference complexity, we train an RSSM world model from the Dreamer papers, so we can do real-time inference whenever their method can. In a recent paper [2], the authors successfully ran the model to control four different robots in real time. Thus, our method is also applicable to real-time scenarios.
For scalability, we also ran experiments on the mix dataset, which is 3 times larger than the other datasets in this paper. The improved results suggest that AIME can handle larger-scale data.
> Does the AIME method handle situations where the world model learned from past experiences is not accurate or complete? What are the implications if the world model is imperfect?
We did not assume that the world model learnt in the first phase is perfect. In fact, none of the pretrained world models considered in the experiments are perfect, since the training datasets did not cover all the dynamics of the embodiment.
When the world model is imperfect, it can degrade the imitation performance, as in the convergence experiment, or in the worst case lead to failures such as divergence. We show a few of these failure cases in Appendix E.
> How well does the AIME method perform in terms of transfer learning? Can the world model learned in one context be effectively applied to another for the policy?
This transfer ability is exactly what we demonstrate with AIME. In the majority of the experiments, we perform cross-task transfer. For example, we learn a world model on the stand task, then imitate the policy of the run task. The results showcase the strong transfer ability of the world model.
> How robust is the AIME method to changes in the environment or task? Can it adapt to new situations without requiring additional training?
In this paper, we mainly assume the same environment in both phases. In general, robustness to environment change can be injected through a diverse training dataset or domain-knowledge-based data augmentation. In terms of tasks, that is exactly what we show in the paper: the pretrained world model exhibits zero-shot imitation ability, without further environment interactions to finetune the model.
[1] Mandlekar *et al.,* Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity, IROS 2019
[2] Wu *et al.*, DayDreamer: World Models for Physical Robot Learning, CoRL 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for your response - I am convinced about the technical contribution of the paper and happy to support acceptance. I will be increasing my score by 1 point. | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and insightful feedback.
We conduct three new experiments suggested by the reviewers, the results are provided in the pdf:
**We evaluate multiple checkpoints during the course of the world model pretraining to address the convergence concerns from reviewer Cs3Z. To be specific, we retrain the model on the walker-mix dataset for 2000 epochs (prolonged from the 1000 epochs in the paper) and save a checkpoint every 100 epochs. Then, all 20 saved models are evaluated by imitating the run policy.**
From the results shown in Fig. 1, the imitation ability is established in a very early phase of the training process, long before convergence.
**We conduct experiments on the VD4RL [1] main datasets, as suggested by reviewer MApq, to show that AIME can also work with datasets generated by model-free methods. We use their random and medium_replay datasets from Walker and Cheetah as embodiment datasets. In addition, we merge these two datasets to form a mix dataset. The models are trained on the three datasets for each embodiment. We then treat the expert datasets in the benchmark as the demonstration dataset.**
We can see that the performance of both BCO(0) and AIME is generally low, but AIME still outperforms BCO(0), which shows that AIME can also handle datasets generated by model-free methods. The low performance is due to the more constrained setup of the task, i.e. a smaller amount of embodiment data and less diversity. Except for cheetah-medium_replay, which has 400 trajectories, the other three datasets provided by VD4RL have only 200 trajectories, much fewer than the 1000 trajectories in our paper. Moreover, Fig. 2 and Fig. 3 already show that random datasets do not help much with learning a model, and intuitively the medium_replay dataset is better but still does not contain enough information to solve the task.
**We conduct experiments on the VD4RL [1] distracting datasets, as suggested by reviewer MApq, to test the performance of AIME on distracting datasets. For the walker embodiment, the benchmark provides random datasets with distraction levels of easy, medium, and hard. We also merge these three levels into a mix dataset. Moreover, we merge this mix dataset with the mix dataset from the second experiment to form a total_mix dataset. We treat these five datasets as the embodiment dataset and the expert dataset as the demonstration dataset. For the cheetah embodiment, the benchmark provides medium and expert datasets with distraction levels of easy, medium, and hard. We subsample the medium datasets to get 200 trajectories for each level, then merge them with the mix dataset from the second experiment to form a total_mix dataset. The algorithms then use this total_mix dataset as the embodiment dataset and the expert dataset as the demonstration dataset.**
As we can see from the results, although it still outperforms the BCO(0) baseline, AIME is largely influenced by the distractions. This behaviour is expected since the world model is trained with a reconstruction loss, which makes it hard to handle observations with distractions. A potential solution to this problem is to freeze only the dynamics part of the world model and allow the encoders and decoders to also be finetuned in the second phase. But due to the limited time of the rebuttal, we were unable to test this idea.
We would like to mention that, due to the limited time during the rebuttal, we only ran AIME and the BCO(0) baseline with the default parameters used in the paper. Further hyper-parameter tuning could achieve better results.
For individual questions raised by each reviewer, please find the responses below.
[1] Lu *et al.*, Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations, TMLR 2023
Pdf: /pdf/9bb066c7930a54d777ca902cbb68544e30a4015e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective | Accept (poster) | Summary: This paper studies the cause of neural text degeneration, i.e. language models tend to generate repetitive loops. They design an experiment showing that text degeneration is correlated with the amount of repetitive text in the training data. Motivated by this finding, they propose repetitive dropout, which applies dropout on the attention weights over repetitive context. They experiment with 3 datasets and demonstrate that their proposed method can greatly reduce repetition. They also re-inspect previous hypotheses:
1. high in-flow words: they show that merging repetitive words contributes to a large portion of the effect of merging high in-flow words.
2. the maximum likelihood objective: they show that even when the model is trained with the maximum likelihood objective, as long as repetitions are penalized, the model does not degenerate much. Thus, the maximum likelihood objective is not the cause of neural text degeneration.
3. self-reinforcement of repetition: they argue that
a. It is also caused by the repetitive words in the training data.
b. The self-reinforcement loops are broken by their proposed method.
Finally, they categorize repetitions into three categories and study their frequency along with their effect on models’ degeneration behavior. They find that the *theme* category has the greatest effect.
Strengths: 1. Most parts of the paper are easy to follow.
2. Neural text degeneration has been discovered since 2019, but its cause is still unclear. Finding the root cause may interest audiences in this field.
3. Despite the hypothesis being simple, as far as I know, it hasn’t been studied. The experiment design in Section 4 is clever.
4. They proposed a simple method that can reduce repetition effectively.
5. They inspected a few previous hypotheses and the results are aligned with their hypothesis (in general).
6. They inspected the effect of three types of repetitions in the training data, which in my opinion is insightful.
In sum, in my opinion, their analyses of the cause of neural text degeneration are comprehensive and the conclusions are convincing.
Weaknesses: In my opinion, because the main message of this paper is clear enough, the following issues may not be very crucial.
1. The authors only compare their method with one baseline in Table 1. There are other mitigations for text degeneration, e.g. [14, 22, 11] cited in the paper.
2. Section 6.2 is relatively hard to follow.
1. Line 248, “The core idea of many previous works is to penalize a specific set of data to alleviate the degeneration.” is vague, though I can roughly figure out what it means after reading the following parts.
2. Having a brief introduction for the definition of high-inflow words will make the paper more self-contained.
3. For the Likelihood Objective part, the paragraph at line 283 needs a clearer topic sentence. The connection between the experiment design in paragraph at line 288 and the argument in the previous paragraph is not very clear to me either.
3. At Line 312, the author mentioned “we find that the model trained by repetition dropout can break the self-reinforcement loop”. Some evidence should be provided.
4. At Line 223, “we hypothesize that the exposure bias issue is an important factor for the left repetition, because we find that lower-order n-gram repetitions generally appear earlier than higher-order repetitions…”, I couldn't understand the explanation. Some details should be provided too.
5. Given the popularity of large language models, the study on GPT-2-sized models may be less useful for the general audience.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. If repetition in the training data is the cause of degeneration, why not simply remove that repetitive text from the training data? This seems to me a more straightforward method than applying attention dropout.
2. If the authors promise to address the issues in the weakness section above, I would be happy to increase my score, because I think the main message is clear and interesting enough.
3. Having some analyses on large language models would make this work more impactful, e.g.
1. If I understood correctly, the analyses in Section 6.3 can also be done for LLMs.
2. I also wonder what would be an effective way to fine-tune an existing (large) model so it doesn't generate repetitive text.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: The authors only compare their method with one baseline in Table 1.**
*Table 1: Added experiments of ScaleGrad on FreeLaw*
| | Rep-2 | Rep-3 | Rep-4 | Rep-w | Rep-r |
|-------------|-------|-------|-------|-------|-------|
| MLE | 51.74 | 46.19 | 42.22 | 39.22 | 73.06 |
| ScaleGrad | 15.82 | 10.28 | 7.52 | 15.97 | 32.40 |
| Rep-Dropout | 10.15 | 5.60 | 3.49 | 17.55 | 23.21 |
*Table 2: Added experiments of ScaleGrad on OpenWebText2*
|| Rep-2 | Rep-3 | Rep-4 | Rep-w | Rep-r |
|-------------|-------|-------|-------|-------|-------|
| MLE| 73.96 | 70.61 | 67.91 | 67.28 | 88.27 |
| ScaleGrad | 26.61 | 21.26 | 17.98 | 26.08 | 45.21 |
| Rep-Dropout | 25.24 | 16.14 | 11.10 | 34.73 | 49.80 |
Thank you for your feedback. In our original paper, we conducted experiments on three datasets, as presented in Table 1. For the Wikitext-103 dataset, we compared our method against four baseline approaches: MLE, ScaleGrad, HI-Re, and UL. However, for the FreeLaw and OpenWebText2 datasets, we only compared our method with the MLE baseline to demonstrate its effectiveness, because the original works of those baselines did not evaluate on the FreeLaw and OpenWebText2 datasets.
As per your suggestion, we have now included additional baselines for the FreeLaw and OpenWebText2 datasets. Due to the time limit of the response period, we have incorporated the performance of the most effective baseline, ScaleGrad, for these two datasets.
The updated results can be found in Tables 1 and 2. Our method consistently outperforms both MLE and ScaleGrad across the majority of the Rep-X metrics. The findings on these two datasets align with those observed on the Wikitext-103 dataset. We will report these additional experiments in the revised paper.
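For readers unfamiliar with the Rep-X metrics in the tables above: Rep-n is commonly computed as the portion of duplicated n-grams in the generated text. A minimal sketch under that assumed definition (the exact formulation in the paper may differ):

```python
def rep_n(tokens, n):
    """Percentage of n-grams that are duplicates:
    100 * (1 - |unique n-grams| / |n-grams|)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 100.0 * (1.0 - len(set(ngrams)) / len(ngrams))

# A degenerate loop scores high; varied text scores low.
print(round(rep_n("a b c a b c a b".split(), 2), 1))  # → 57.1
print(rep_n("the quick brown fox".split(), 2))        # → 0.0
```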
**Q: Section 6.2 is relatively hard to follow.**
Thanks for your valuable suggestions. We will follow your comments and elaborate those parts with more details in the revised version.
**Q: At Line 312...**
Thanks for your valuable suggestion! This conclusion was made by inspecting the generated results of MLE and our method. In the revised paper, we promise to attach a case study to support this conclusion.
**Q: At Line 223**
We appreciate your feedback and will clarify this aspect in the revised version. The statement in question aims to explain why our method's Rep-n score did not reach human-level performance. Upon examining the challenging cases encountered by our method, we identified an error accumulation phenomenon, where higher-order repetitions typically occur following lower-order repetitions.
As discussed in previous works, an LM is more prone to fall into repetitive patterns when facing unseen states, which is referred to as the exposure bias issue. Consequently, when our model encounters lower-order repetitions that were not observed in training, it tends to generate more severe degeneration.
**Q: ... Study on GPT-2-sized models may be less useful for the general audience.**
*Table 3: Rep-2 score of text generated by OPT models on five datasets using greedy search.*
| Dataset\Models | opt-125m | opt-350m | opt-1.3b | opt-2.7b | opt-6.7b | opt-13b | opt-30b | opt-66b |
|----------------|----------|----------|----------|----------|----------|---------|---------|---------|
| OpenWeb | 69.66 | 67.74 | 58.80 | 54.75 | 51.68 | 50.46 | 46.17 | 47.52 |
| Wiki-103 | 73.77 | 70.50 | 61.47 | 58.24 | 54.62 | 53.73 | 50.29 | 51.70 |
| FreeLaw | 72.80 | 69.90 | 60.37 | 56.90 | 51.95 | 50.18 | 48.45 | 47.44 |
| PubMed | 72.68 | 69.28 | 61.52 | 57.33 | 54.98 | 52.52 | 51.46 | 51.21 |
| ArXiv | 76.25 | 75.16 | 66.46 | 62.67 | 59.75 | 58.25 | 56.49 | 54.92 |
We appreciate your valuable suggestion. Indeed, we initially discussed this aspect in an earlier version of our paper. However, we decided to remove it from the final submission to maintain focus and due to the inability to thoroughly investigate all factors within a single paper. We will reintroduce this discussion in the revised paper.
As shown in Table 3, we evaluated the Rep-2 score of OPT models with parameters ranging from 125M to 66B on the five datasets. The results indicate that increasing the model size does help alleviate the repetition issue to some extent. However, the gains from increasing the model size diminish as the size grows. Notably, the OPT-66B model still generates text with a high Rep-2 score.
**Q: ... why not simply remove those repetitive text from the training data?...**
Thanks for the interesting question. Actually, our preliminary study in Section 4 is based on a similar idea. However, it is important to note that human text containing repetitive n-grams does not necessarily indicate low quality, and discarding such data points could hurt model training. To address this concern, we proposed the repetition dropout method, which selectively masks parts of repetitive n-grams, enabling more efficient utilization of the training data.
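The masking idea described here can be sketched as follows. This is only our rough reconstruction (the n-gram size, dropout probability, and all names are our own assumptions); the authors apply the dropout to attention weights rather than producing a standalone mask:

```python
import random

def repeated_ngram_positions(tokens, n=2):
    """Positions covered by n-grams that already occurred
    earlier in the same sequence."""
    seen, positions = set(), set()
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        if ngram in seen:
            positions.update(range(i, i + n))
        seen.add(ngram)
    return positions

def repetition_dropout_mask(tokens, n=2, p=0.5, rng=random):
    """1 keeps a position; 0 drops it from the attention context.
    Only positions inside repeated n-grams are eligible to be
    dropped, so non-repetitive human text is left untouched."""
    rep = repeated_ngram_positions(tokens, n)
    return [0 if (i in rep and rng.random() < p) else 1
            for i in range(len(tokens))]

toks = "the cat sat on the cat sat on the mat".split()
print(sorted(repeated_ngram_positions(toks)))  # → [4, 5, 6, 7, 8]
```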
**Q: ... the analyses in Section 6.3 can also be done for LLMs.**
Thanks for your suggestion. Yes, the analyses in Section 6.3 can also be conducted on LLMs. We will leave the thorough analysis of LLMs to future work.
**Q: ... an effective way to fine-tune an existing (large) model ...**
*Table 4: Experiments on GPT-XL (1.5 billion parameters)*

| | rep-2 | rep-3 | rep-4 | rep-w | rep-r |
|-------------------|-------|-------|-------|-------|-------|
| MLE | 54.26 | 49.21 | 45.84 | 66.10 | 37.72 |
| MLE + Rep-Dropout | 11.36 | 5.80 | 3.67 | 24.39 | 18.19 |

Thanks for this good question. Our method can be directly extended to LLMs. In Table 4, we use our method to fine-tune the GPT-XL model for three epochs. As the table shows, our method can also significantly alleviate the repetition issue of GPT-XL, which further validates the effectiveness of our proposed approach.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I enjoy reading your solid analyses. I have raised my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Pdys
Comment: Dear Reviewer Pdys,
We are grateful for your insightful feedback and your recognition of our efforts to address the concerns. Thank you so much for increasing the scores.
Regards,
Authors | Summary: In this paper, the authors demonstrate that repetition in the training data is a major cause of the neural text repetition problem. They first show a strong correlation between the repetition ratio of training data and generated text. Based on the observation, they propose repetition dropout to prohibit the model from learning repetition in data during training. Experimental results show that repetition dropout significantly addresses the repetition problem compared to previous approaches and penalizing repetition in training data is a key factor for reducing the problem. In analysis, the authors demonstrate that their method specifically reduces repetition caused by subject matter rather than grammar or highly frequent phrases.
Strengths: - The proposed method and supporting experiments are well-motivated and the findings are interesting.
- They significantly reduce the neural text repetition problem compared to previous work.
Weaknesses: - To demonstrate that the distribution of generated texts is close to that of human texts, authors could further utilize metrics such as MAUVE [1].
- Experiments on various parameter sizes would be beneficial for further understanding such as "Does increasing model size enhance the robustness to the repetition problem" or "How are the previous and proposed methods effective as the model size increases?"
- A case study on samples generated from the models would provide further insights.
[1] Pillutla, Krishna, et al. "Mauve: Measuring the gap between neural text and human text using divergence frontiers." Advances in Neural Information Processing Systems 34 (2021): 4816-4828.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is there a possible reason for a relatively high repetition ratio in OpenWebText2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As written in the Limitation section, experiments on various scales could be conducted in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: metrics such as MAUVE**
*Table 1: MAUVE scores on Wikitext-103*
| | Rep-4 ⬇️| MAUVE ⬆️| PPL ⬇️ |
|-------------|-------|-------|-------|
| MLE | 32.64 | 49.70 | 21.98 |
| HI-Re | 28.35 | 35.83 | -- |
| ScaleGrad | 5.01 | 52.80 | 39.11 |
| UL | 22.88 | 50.06 | 21.93 |
| Rep-Dropout | 2.14 | 52.20 | 28.26 |
Thanks for your kind suggestion. Most results on MAUVE score are consistent with those on rep-n score. As shown in Table 1, our method and ScaleGrad achieve the best performance on Wikitext-103. One exception is the High-Inflow Re-encoding baseline, which achieves the worst performance in terms of MAUVE.
**Q: Does increasing model size enhance the robustness to the repetition problem? How are the previous and proposed methods effective as the model size increases?**
*Table 2: Rep-2 score of text generated by OPT models on five datasets using greedy search.*
| Dataset\Models | opt-125m | opt-350m | opt-1.3b | opt-2.7b | opt-6.7b | opt-13b | opt-30b | opt-66b |
|----------------|----------|----------|----------|----------|----------|---------|---------|---------|
| OpenWeb | 69.66 | 67.74 | 58.80 | 54.75 | 51.68 | 50.46 | 46.17 | 47.52 |
| Wiki-103 | 73.77 | 70.50 | 61.47 | 58.24 | 54.62 | 53.73 | 50.29 | 51.70 |
| FreeLaw | 72.80 | 69.90 | 60.37 | 56.90 | 51.95 | 50.18 | 48.45 | 47.44 |
| PubMed | 72.68 | 69.28 | 61.52 | 57.33 | 54.98 | 52.52 | 51.46 | 51.21 |
| ArXiv | 76.25 | 75.16 | 66.46 | 62.67 | 59.75 | 58.25 | 56.49 | 54.92 |
Thank you for your valuable suggestion. We did discuss this aspect in an earlier version of our paper but decided to remove it from the final submission, as we felt it might detract from the paper's focus. As demonstrated in Table 2, we evaluated the Rep-2 score of OPT models with parameters ranging from 125 million to 66 billion on the five datasets used in our final paper. Our results show that increasing the model size does alleviate the repetition issue to some extent. However, the gains diminish as the model size grows, and the OPT-66B model continues to generate text with an extremely high Rep-2 score.
*Table 3: Experiments on GPT-XL (1.5 Billion parameters)*
| | rep-2 | rep-3 | rep-4 | rep-w | rep-r |
|-------------------|-------|-------|-------|-------|-------|
| MLE | 54.26 | 49.21 | 45.84 | 66.10 | 37.72 |
| MLE + Rep-Dropout | 11.36 | 5.80 | 3.67 | 24.39 | 18.19 |
Because of the time limit of the rebuttal period, we only evaluated our method and one baseline method on GPT-XL, which has 1.5 billion parameters. We fine-tuned the GPT-XL using MLE and our method for 3 epochs. As shown in Table 3, we find that fine-tuning a larger model with our method can also significantly alleviate the repetition issue.
**Q: A case study on samples generated from the models would provide further insights.**
Thanks for the useful suggestion. In the revised version, we will add a case study section to show the characteristics of our method and the baseline method.
**Q: Is there a possible reason for a relatively high repetition ratio in OpenWebText2?**
Thanks for the good question. The data in OpenWebText2 comes from Reddit, where each thread discusses a particular topic with diverse text styles and many participants. In Section 6.3, we find that LMs spend more effort learning the repetition of theme-related n-grams to reduce the model's PPL. Consequently, we hypothesize that the Reddit data may encourage repetition behavior during learning. That is also why we conduct experiments on multiple datasets in Section 4.
---
Rebuttal Comment 1.1:
Comment: Thank you for the experimental results, which have addressed my curiosity. I can see that the repetition problem is not resolved by simply scaling the language model, and that Rep-Dropout is effective for larger models. I think further studies on scaling and decoding strategies would also be meaningful.
I raised the score after reading the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thanks for your constructive feedback and raising the overall score! We agree that it is meaningful to investigate the scaling and decoding strategies, and we also planned to compare those different factors in a unified framework. We will leave the detailed investigations to our future work. | Summary: The paper explores the issue of degeneration in text generation, which refers to the generation of repetitive words and dull loops by neural language models. The authors focus on the impact of repetition in the training data and propose a method to address this issue. Specifically, they suggest dropping out repetitive words during the training data pre-processing stage. The main conclusion drawn from their investigation is that penalizing repetitive n-grams in the training data is crucial for the effectiveness of existing methods in preventing degeneration. By highlighting the importance of addressing repetition in the training data, the paper contributes to understanding the factors influencing degeneration in text generation.
Strengths: The paper is generally well-written and is easy to follow. The problem of generating repetitive words and dull loops has been widely observed and extensively discussed in the field, which makes this problem important and worth exploring. The conclusion drawn by the authors, that training data plays a pivotal role in the occurrence of degeneration, is logical and well-supported.
Weaknesses: The paper primarily focuses on data pre-processing rather than novel training techniques. This limited scope may diminish its technical novelty. Many researchers working on text generation may have already explored similar data pre-processing ideas, making the proposed method less impactful or original. Also, the conclusion of the paper lacks novel insights and seems to affirm an intuitive understanding. While it is reasonable to assume that data quality has a direct impact on degeneration, this insight does not offer any new or surprising findings. As a result, the conclusion may be considered weak in terms of offering novel contributions or expanding the current understanding of the issue.
Explore additional factors that could contribute to degeneration in text generation, such as the number of training data and model size. Investigating these factors may provide deeper insights into the phenomenon and allow for a more comprehensive understanding of degeneration. This expansion could strengthen the paper and enhance its impact within the field.
For me, it would be more interesting to study other factors that impact degeneration, such as the number of training data, the model size and the model architecture. Investigating these factors may provide deeper insights into the phenomenon and allow for a more comprehensive understanding of degeneration. With more training data and more parameters, large language models can capture semantics and thus prevent degeneration. For example, degeneration has been greatly resolved for ChatGPT due to its expressiveness and more general understanding of texts, In my opinion, tackling degeneration in data pre-processing is not very meaningful because the root cause is still the model lacks of capability to understand and generate reasonable texts.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does dropping some repetitive n-grams sometimes distort the semantics of the document?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are other potential factors that can impact the degeneration. The paper should have a more comprehensive discussion about all the factors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: While it is reasonable to assume that data quality has a direct impact on degeneration, this finding is not surprising**
We'd like to emphasize that the goal of our work is to investigate the relationship between repetition in training data and degeneration. It is important to note that data containing repetitions is not necessarily of low quality; in fact, the repetition of certain words in natural language is sometimes necessary [1,12,23] (cited in paper). The problem is that LMs learn and amplify the repetition in the training data, which has not been studied in previous works. In addition, we give a unified interpretation, from the data perspective, of why many previous methods succeed.
In recent years, data-centric AI has garnered the attention of numerous researchers, as many model behaviors may be linked to patterns in the data. By establishing a clear connection between repetitions in training data and the degeneration issue, we believe that researchers can gain a better understanding of degeneration and develop more targeted and effective methods.
**Q: Explore additional factors**
Table 1: Rep-2 score of text generated by OPT models on five datasets using greedy search.
| Dataset\Models | opt-125m | opt-350m | opt-1.3b | opt-2.7b | opt-6.7b | opt-13b | opt-30b | opt-66b |
|----------------|----------|----------|----------|----------|----------|---------|---------|---------|
| OpenWeb | 69.66 | 67.74 | 58.80 | 54.75 | 51.68 | 50.46 | 46.17 | 47.52 |
| Wiki-103 | 73.77 | 70.50 | 61.47 | 58.24 | 54.62 | 53.73 | 50.29 | 51.70 |
| FreeLaw | 72.80 | 69.90 | 60.37 | 56.90 | 51.95 | 50.18 | 48.45 | 47.44 |
| PubMed | 72.68 | 69.28 | 61.52 | 57.33 | 54.98 | 52.52 | 51.46 | 51.21 |
| ArXiv | 76.25 | 75.16 | 66.46 | 62.67 | 59.75 | 58.25 | 56.49 | 54.92 |
Thank you for your valuable suggestion. In an early version of our paper, we did evaluate factors beyond the data, such as the impact of model architecture and model size.
We obtained some interesting findings. For instance, in Table 1, we assessed the rep-2 score of OPT models with parameters ranging from 125M to 66B on the five datasets used in our final paper. Our results show that increasing the model size does alleviate the repetition issue to some extent. However, the gains diminish as the model scales, and the OPT-66B model still generates text with a high rep-2 score. We also evaluated the impact of various model architectures, such as enc-dec Transformers, dec-only Transformers, and LSTMs. All models trained with MLE exhibit severe repetition issues, with no clear indication of which architecture suffers more from this problem.
Nevertheless, we decided to remove those sections from the final submission, which was a difficult decision. The reason is that it is challenging to thoroughly analyze and explain all these factors within a 9-page paper. As a result, we chose to focus on one critical factor: the repetition in training data.
We can reintroduce the discussion about other factors in the appendix of the revised paper.
**Q: The paper primarily focuses on data pre-processing rather than novel training techniques.**
We appreciate your concern and would like to clarify that while our analysis paper includes experiments with data pre-processing to clearly demonstrate the research problem, our method to alleviate degeneration is a learning algorithm, not a data pre-processing method. Specifically, inspired by dropout, we propose an attention-based repetition dropout method to encourage the model to make predictions without relying on repetitive n-grams in the context. As demonstrated in our paper, the LM trained using our method achieves exceptionally low rep-n scores at a much lower cost than scaling up the model and data size.
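The attention-based repetition dropout described above is the authors' method, whose implementation details are not given in this rebuttal. The following is therefore only a toy Python sketch of the general idea — mark positions of n-grams that re-occur in the context so they can be excluded from attention with some probability; the function name and the exact masking rule are illustrative assumptions:

```python
import random

def repetition_dropout_mask(tokens, n=2, p=0.5, seed=0):
    """Toy sketch: mark token positions of n-grams that already occurred
    earlier in the context; each repeated occurrence is dropped (masked
    from attention) with probability p. Illustrative only — the paper's
    actual attention-based rule may differ."""
    rng = random.Random(seed)
    seen = set()
    mask = [False] * len(tokens)  # True = hide this position from attention
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        if gram in seen and rng.random() < p:
            for j in range(i, i + n):
                mask[j] = True
        seen.add(gram)
    return mask

tokens = "the cat sat on the mat the cat sat on the mat".split()
mask = repetition_dropout_mask(tokens, n=2, p=1.0)
print(mask)  # later repeats of earlier bigrams (positions 6-11) are masked
```

At inference time no mask would be applied, matching the rebuttal's point that every prefix word stays visible during generation.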
**Q: With more training data and more parameters, LLMs can prevent degeneration, e.g., ChatGPT...**
We agree that increasing the amount of data and the model size is likely to improve performance. However, as shown in Table 1, OPT-66B still suffers from severe degeneration, and increasing the size can only alleviate the issue to a certain extent; e.g., the rep-2 score improves from 70 to 47 as the model size increases from 125M to 66B. This shows that the degeneration issue cannot be mainly attributed to the small scale of the model or the training dataset. Moreover, in our preliminary experiments, many LLMs, including GPT-3 (e.g., text-davinci-002, 175B parameters), continue to suffer from degeneration. Fortunately, simply dropping out the attentions relating to repetitions in the training data can effectively reduce repetition to an extremely low level, even with a relatively small model and training set. Therefore, we believe the repetitions in training data are a crucial factor in the degeneration issue.
As you pointed out, we also observed that ChatGPT exhibits less degeneration. However, ChatGPT is a product-level system built through a long development pipeline. To the best of our knowledge, there is no convincing literature within our community explaining how ChatGPT achieves this. In fact, this is a research topic that we are currently exploring.
We will add this discussion to the revised paper to clarify our motivation.
**Q: Does dropping some repetitive n-grams sometimes distort the semantics of the document?**
Yes, it is possible. This is a common potential issue in many previous works, e.g., BERT, which also masks part of the text. That is why we only apply repetition dropout at training time; in other words, at inference time the word at each step can attend to all prefix words. In our human evaluation of the generated results, we also find that the text quality of our method is much higher than that of the MLE baseline.
---
Rebuttal Comment 1.1:
Title: Raise my rating to 6
Comment: Thanks for the rebuttal, which addressed my primary concerns. The observation that scaling up a model size can somewhat mitigate repetition while still presenting a significant challenge for larger models is interesting. I also notice that the models discussed in the rebuttal are not instruction-tuned models. I'm curious if there's a specific rationale behind this choice. Although I recognize ChatGPT is not comparable due to its limited accessibility, I believe it could be insightful to include a comparison of instruction-tuned models (i.e. alpaca/vicuna/Llama2) and the model without instruction-tuning (i.e. LLama). Also, it would be interesting to analyze how rep scores connect to the downstream generative task performance metrics like ROUGE scores for summarization.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your kind reply to our previous response and for raising the score! We really enjoyed the discussion with you, and we are happy that our response addressed your primary concerns.
**Q: I also notice that the models discussed in the rebuttal are not instruction-tuned models. I'm curious if there's a specific rationale behind this choice.**
The main reason for this choice is to make a fair comparison with previous works, because most previous works focus only on standard language models.
**Q: I believe it could be insightful to include a comparison of instruction-tuned models (i.e. alpaca/vicuna/Llama2) and the model without instruction-tuning (i.e. LLama).**
Table 1: Results of Llama2-7B on instruction-following data. The column "Rep-2 of FT Data" indicates the rep-2 score of the training data used for fine-tuning. The remaining Rep-2, Rep-3, and Rep-4 scores are evaluated on the text generated by the different methods. "FT" means fine-tuning.
| No. | Methods | Rep-2 of FT Data | Rep-2 | Rep-3 | Rep-4 |
|-----|----------------------------------|-------|-------|-------|-------|
| 1 | Llama2 w/o FT | -- | 47.79 | 41.97 | 38.52 |
| 2 | FT Llama2 on Alpaca | 5.54 | 15.08 | 10.91 | 8.93 |
| 3 | FT Llama2 on Alpaca + WT-103 50K | 9.67 | 41.63 | 35.64 | 32.29 |
| 4 | FT Llama2 on WT-103 | 10.31 | 54.10 | 49.77 | 36.80 |
Thanks for this valuable suggestion. We agree that analyzing the effect of instruction-tuning on degeneration is meaningful. We conduct experiments on three datasets:
1. **Alpaca**: The instruction-tuning dataset used by Alpaca [1].
2. **WT-103 50K**: We randomly sample 50k sentences from Wikitext-103 and convert them to the instruction-following data. More details are at the end of this response.
3. **Alpaca + WT-103 50K**: The mixture of Alpaca and WT-103 50K
As shown in Table 1, the “Llama2 w/o FT” (Line 1) indicates the LLM without instruction-tuning, and “FT Llama2 on Alpaca” (Line 2) means the Llama2 with instruction-tuning. We can find that the instruction-tuning process does alleviate the degeneration issue.
[1]: https://crfm.stanford.edu/2023/03/13/alpaca.html
However, we hypothesize that the alleviation of degeneration is caused by the **Alpaca** training data containing fewer repetitions. As shown in Table 1, the rep-2 scores of the **Alpaca**, **Alpaca + WT-103 50K**, and **WT-103 50K** datasets are 5.54, 9.67, and 10.31, respectively. We find that the degeneration issue becomes more severe when we fine-tune the model on instruction-following data with a higher repetition rate (Lines 2-4 in Table 1). This observation further demonstrates that the degeneration issue is highly correlated with the repetitions in training data during instruction tuning, consistent with the finding in our paper.
Implementation details of our experiments:
1. **Fine-tuning strategy**: we use QLoRA to fine-tune the Llama2-7B model, due to limited computational resources.
2. **Decoding strategy**: greedy search
3. **Test Data**: The test set of Wikitext-103 in the instruction-following format.
4. **Data pre-processing**: To ensure a fair comparison, we convert the Wikitext-103 dataset to an instruction-following dataset using the following template:
```
{
"instruction": "Please continue writing based on the following prefix. The text to be continued should be relevant, fluent and informative.",
"input": PREFIX, # prefix of a sentence
"output": COMPLETION # the completion of the prefix
}
```
**Q: Also, it would be interesting to analyze how rep scores connect to the downstream generative task performance metrics like ROUGE scores for summarization.**
Thanks for your suggestion. Downstream tasks, e.g., summarization, may also suffer from the degeneration issue. We agree it is interesting to evaluate our methods and findings on those tasks. We will leave further investigation to future work. | Summary: This paper explains the repetition in model-generated text from a data standpoint, pointing out that there is a strong correlation between the degeneration issue and the presence of repetitions in training data. The authors find that penalizing repetitions in the data can alleviate degeneration, and propose a method, repetition dropout, which applies dropout in the Transformer's sublayers at training time. The proposed method achieves significant improvements in terms of REP-n scores.
Strengths: - This is a very well written paper! The preliminary study does a great job in introducing and clearly motivating the problem.
- The proposed repetition dropout is a very simple technique, yet it still improves the REP-n statistics.
- The comparison between different methods and objectives in section 6 is very interesting and thorough.
Weaknesses: Currently it seems that the repetition dropout mask is applied to each instance, but what about text repeated across different instances? How should we apply it in the current LM training paradigm?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: NA
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Currently it seems that the repetition dropout mask is applied on each instance, but what about repeated text in different instance?**
Thanks for the question. To ensure that we understand your query correctly, we would like to confirm that by "repeated text" you are referring to repetitive n-grams, and by "instance" you mean a sentence. If our interpretation is accurate: in our work, the decision to drop an n-gram through the attention mechanism depends entirely on its context, i.e., a sentence of 256 words. In other words, if an n-gram appears only once in the context, we will not drop it; if it occurs multiple times within the context, we may mask the repeats according to the dropout rate. Please let us know if we have misunderstood your question or if you require further clarification!
**Q: How should we apply it in current LM training paradigm?**
Table 1: Experiments on GPT-XL (1.5 Billion parameters)
| | rep-2 | rep-3 | rep-4 | rep-w | rep-r |
|-------------------|-------|-------|-------|-------|-------|
| MLE | 54.26 | 49.21 | 45.84 | 66.10 | 37.72 |
| MLE + Rep-Dropout | 11.36 | 5.80 | 3.67 | 24.39 | 18.19 |
Thanks for this good question. The only difference between the current LM training paradigm and ours lies in the repetition dropout mask, which can easily be pre-computed before pretraining. Thus, our repetition dropout technique is compatible with most LM pretraining paradigms. For example, it is easy to extend it to larger language models: in Table 1, we directly apply our repetition dropout method to GPT-XL, which has 1.5 billion parameters. Since it is difficult to train GPT-XL from scratch, we fine-tuned it on Wikitext-103 for 3 epochs. The results in Table 1 demonstrate that our method can also alleviate the degeneration of large LMs after fine-tuning.
---
Rebuttal Comment 1.1:
Title: Thank you for your reply
Comment: Hi,
Thank you for your reply!
>confirm that by "repeated text" you are referring to repetitive n-grams, and by "instance" you mean a sentence
Yes, your interpretation is correct.
I am satisfied with the answers to my question, and my rating will stay the same.
---
Reply to Comment 1.1.1:
Comment: Thanks for your kind confirmation, and we are glad that the previous response addressed your concern. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Differentiable Registration of Images and LiDAR Point Clouds with VoxelPoint-to-Pixel Matching | Accept (spotlight) | Summary: This paper introduces a framework for registering 2D images and 3D point clouds. The framework consists of three branches for processing 2D images, 3D point clouds, and a 3D voxel representation constructed from the original point clouds. The fused features of the point clouds and voxels are used to establish correspondences with the 2D image features, and a differentiable PnP solver calculates the transformation for end-to-end training. Experiments are conducted on two public datasets, KITTI and nuScenes, and the proposed method outperforms several existing baseline methods.
Strengths: 1. The three-branch structure is novel for this task and seems effective in improving the registration performance.
2. The proposed method achieved much higher performance than existing methods in the experimental setting.
3. The paper is generally well written and easy to follow.
Weaknesses: Major
1. My major concern is the practical meaning of the proposed work. I wonder if there is a real scenario where this kind of registration is essential. Why not establish the correspondences between 2D images and 3D point clouds by device calibration? If we calibrate a LiDAR and a camera, which should not be difficult, we can register the data captured with them (including both images and point clouds) to either an image or a point cloud by mono-modal registration. These two kinds of mono-modal registration are both well studied, with much higher accuracy.
2. The ablation study has some issues. First, I notice that the ablation is conducted in a different setting from the main experiment. Why not use the same setting as in Table 1? In the ablation study, only sequences 0-1 are used for training and only sequence 7 for testing, which is much less data than in the main experiment and makes the ablation results unstable. Second, from the ablation study we can see that removing any one component results in a slight decrease in performance, but all ablated variants still perform much better than the baseline methods in Table 1. The existing ablation results cannot show where the performance gain of the proposed method comes from. Further analysis is needed, such as removing two or three components at the same time.
3. The data are split into training and testing sets, but the authors did not mention whether a validation set is used or how they chose the best model for inference.
4. RANSAC is used for more robust registration in the proposed method. We know that RANSAC plays a very important role in correspondence-based registration. Can RANSAC also be used in the compared methods to improve their performance?
Minor
1. The references [31, 32] and [19, 22] in the first sentence of the second paragraph of the Introduction are not proper, since these papers do not study the problem of establishing 2D-3D correspondences.
2. In the introduction of the KITTI dataset, "2D translation on the ground within ±10", what is the unit of the translation?
3. "Registration accuracy (Acc.)" is not a good name for its current definition; registration recall may be better.
4. The inference time is 0.19 s, which cannot be described as real-time.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please respond to questions raised in weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitation is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Practical meaning of the proposed work.**
There are two types of calibration between vehicle-mounted camera and LiDAR, which is pre-calibration and online-calibration, both of them play important roles in the autonomous driving systems. We focus on the online-calibration which is much harder but crucial for autonomous driving scenarios.
The pre-calibration requires manual setup and adjustment, which incurs significant labor costs. For example, the KITTI [1] dataset points out that pre-calibration is not accurate, and manual calibration is required to obtain the real transformation. As described in Sec. 2.2 of the KITTI [1] paper, the accurate transformation is obtained by first using pre-calibration methods and then manually selecting a few correspondences between LiDAR points and image pixels to adjust the transformation, which is very labor-intensive. In contrast, we aim to calibrate online, automatically, at any time during driving, which does not require any manual operation.
Another main difficulty in calibration is the fact that the transformation between the vehicle-mounted camera and LiDAR is not constant during driving due to factors such as vehicle shake and rough road conditions. As a result, mis-calibration occurs in almost every frame, and the mis-calibration errors will continue to accumulate over time as the vehicle travels. The naïve pre-calibration methods that only calibrate once are unable to handle these mis-calibrations during driving and require manual adjustment every once in a while.
Our method enables online calibration automatically with a fast inference of only 0.19s per frame, which can handle the mis-calibration errors at any time during driving and provide an intelligent and effective way for improving the robustness of autonomous driving systems.
[1] Geiger, Andreas, et.al. Are we ready for autonomous driving? the kitti vision benchmark suite. CVPR2012
**Q2: The data used for ablation studies.**
We conduct ablation studies to explore each design in our frameworks and some important hyper-parameters like image resolutions, point densities, safe radius, feature dimensions, etc. Using a subset of the large-scale dataset for ablation studies is an efficient way and can justify the effectiveness of the designs.
We leverage the first two sequences of the KITTI dataset, which contain about 30% of the image-to-point-cloud pairs of the full dataset, as a subset to conduct comprehensive ablations and efficiently verify more hyper-parameters and designs.
We agree that conducting ablation studies in the same setting as the main experiment, using the whole KITTI dataset, can validate the design choices more thoroughly than using only a subset. We will conduct the ablation studies on the whole KITTI dataset in the revision; we could not do so here due to the limited time of the rebuttal period.
**Q3: Comprehensive ablations on the designs.**
We provide comprehensive ablations on our framework designs in Table H of the rebuttal PDF. Removing all our designs (the voxel branch, adaptive-weighted optimization, and differentiable PnP) degrades the performance from 0.65/2.10/91.14 to 1.98/4.73/61.50 in terms of RTE/RRE/Acc., which is a total failure. We also report the performance using one, two, or three of our designs at the same time in Table H of the rebuttal PDF, where the results comprehensively demonstrate the effectiveness of each design.
**Q4: How to choose the best model?**
We follow previous works (e.g. CorrI2P and DeepI2P) to split the data into training and testing sets for a fair comparison, where no validation set is used. We train our method for 50 epochs and use the final model for inference.
**Q5: Can RANSAC improve other methods.**
Actually, we already use the same setting as ours to apply EPnP with RANSAC to predict the rigid transformation at inference time for the SoTA baseline CorrI2P, as described in L.283-L.285 of Sec. 4.2. The reason why CorrI2P fails to achieve accurate registration is that it produces many wrong matches, which result in wrong 2D-3D correspondences that RANSAC can hardly eliminate even after many iterations. In contrast, our method produces much better cross-modality matches and leads to robust 2D-3D correspondences, so only a few RANSAC iterations are required for accurate registration. The other baseline, DeepI2P, performs frustum classification, where no 2D-3D correspondences are established, so EPnP with RANSAC is not applicable to its final pose estimation. However, DeepI2P solves the inverse camera projection problem for pose estimation with a 60-fold pose initialization to avoid crashing, which plays a similar role to RANSAC in EPnP.
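The role RANSAC plays here can be illustrated with a generic RANSAC skeleton. This is not the authors' EPnP pipeline — just a toy line-fitting example showing why a high inlier ratio (good matches) lets a consensus model emerge in few iterations, while gross outliers are rejected:

```python
import random

def ransac_line(points, n_iters=100, thresh=0.5, seed=0):
    """Generic RANSAC skeleton (not the paper's EPnP solver): repeatedly
    fit a line y = a*x + b to a minimal 2-point sample and keep the
    hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot fit a line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 8 correct "correspondences" on y = 2x + 1, plus 2 gross outliers.
pts = [(x, 2 * x + 1) for x in range(8)] + [(1, 30), (5, -20)]
(a, b), inliers = ransac_line(pts)
print(a, b, len(inliers))  # 2.0 1.0 8 — outliers rejected
```

With mostly wrong correspondences, as the rebuttal argues happens for CorrI2P, no sampled hypothesis ever gathers a large consensus set, so the estimate stays unreliable regardless of the iteration count.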
**Q6: Minor problems.**
1. We will change the references [31, 32] and [19, 22] in the introduction to proper papers studying 2D-3D correspondences.
2. The unit of the translation in L.267 which introduces KITTI dataset is m (meter), we will correct the statement as “2D translation on the ground within ±10 m”.
3. We will change the name “Registration accuracy” to “Registration recall” in the full text following your suggestions.
4. We will correct the “real-time” statement into “fast registration with 5 fps”.
---
Rebuttal Comment 1.1:
Title: comments on response
Comment: My concerns on this paper are well addressed in the response, though some of them can only be included in a further revision because of the time limit. In particular, I appreciate the detailed ablation study conducted in the response. I update my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks to reviewer Ehvn
Comment: Many thanks for all the helpful comments and positive assessment. We really appreciate reviewer Ehvn for upgrading the score. | Summary: This work aims to address the image-to-point cloud registration task. The authors propose a VoxelPoint-to-Pixel matching framework, which consists of three network branches dedicated to extracting features from voxel, point, and pixel representations, respectively, for 2D-3D matching. The network is trained with four different losses: an overlap prediction loss, a circle loss for 2D-3D feature matching, a KL divergence loss for probabilistic PnP, and a pose loss supervised by ground-truth. To assess the effectiveness of their approach, the authors perform experiments on two existing LiDAR datasets, showing improved 2D-3D matching performance compared to existing methods.
Strengths: - The proposed 2D-3D matching framework combines several well-established techniques. To bridge the domain gap between points and pixels, the authors suggest employing a triplet network to extract element-wise 2D and 3D features. To enhance the robustness of the 2D-3D matching process, the authors incorporate an overlap prediction branch. The whole pipeline is supervised by a 2D-3D matching (circle) loss and pose estimation (PnP) losses.
- The authors assess the matching performance on the KITTI and nuScenes benchmarks, demonstrating that their method has better performance when compared with DeepI2P and CorrI2P.
- The paper is generally easy to comprehend. I appreciate the presence of suitable illustrations accompanying the explanations in Sec. 3.
Weaknesses: - One main concern for this work is its limited technical innovations: the adaptive-weighted loss is basically the circle loss [39], while the KL divergence loss for probabilistic PnP is borrowed from Epro-PnP [8]. Additionally, there is a lack of ablation studies for those four losses - it is unclear how the KL divergence loss $L\_{KL}$ and the pose loss $L\_{pose}$ contribute to the matching performance and whether both of them are necessary.
- For the experiments on the KITTI and nuScenes datasets, the comparisons are limited to DeepI2P and CorrI2P. However, several other works related to 2D-3D matching are absent from the comparisons, such as the ones below. Additionally, it would be interesting to assess the generalization of the proposed method across datasets, for instance, from KITTI to nuScenes.
- Hierarchical Scene Coordinate Classification and Regression for Visual Localization. CVPR 2020.
- P2-Net: Joint Description and Detection of Local Features for Pixel and Point Matching. ICCV 2021.
- For the ablation study in Sec. 4.6, it is unclear why only sequences 0-1 are used. It would be more beneficial to include all available data in order to thoroughly validate the design choices.
- Typos:
- L267, “mis-registration”?
- L309 “dis-matched”?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors did not discuss the limitations of their method. It would benefit the reader to include a failure case analysis, which would provide a more comprehensive understanding of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Technical innovations.**
We did draw inspiration from previous methods for some loss designs. However, to the best of our knowledge, almost no work has explored cross-modality contrastive learning between image features and voxel-point features, where we design a triplet network to learn VoxelPoint-to-Pixel matching to reduce the modality domain gap and obtain robust 2D-3D correspondences and registrations. Epro-PnP cannot be directly leveraged in the image-to-point cloud task since images and LiDAR point clouds are captured in quite different ways. There is a large range of outliers in both modalities where no correspondences can be found. To handle the outlier regions, we design a detection strategy to predict the probability of lying in the intersection region for each 2D/3D element, and remove the outlier regions in both modalities before inferring 2D-3D correspondences to solve the probabilistic PnP. Moreover, using the original contrastive formulation, the network fails to produce a structured cross-modality latent space to represent both 2D and 3D features, as shown in Figure 4. To address this issue, we introduce a loss with adaptive-weighted optimization inspired by the circle loss to explore a distinctive cross-modality latent space, and also design a radius-based pair generation strategy to construct negative and positive pairs, as discussed in Sec. 3.2. When directly using the original circle loss without our radius-based pair generation strategy, the network also fails to produce a structured cross-modality latent space for both 2D and 3D features, and the performance drops from 0.65/2.10/91.14 to 0.96/2.94/88.10 in terms of RTE/RRE/Acc.
In summary, we propose the first end-to-end framework for learning image-to-point cloud registration, which enables fast inference. Compared to previous state-of-the-art methods, our framework reduces the registration error from 3.59 to 0.75 in terms of RTE. It also significantly reduces the inference time from 8.96 s to 0.19 s, demonstrating its promise for autonomous driving systems.
**Q2: More ablations for designed losses.**
We provide comprehensive ablation studies on each of the designed losses following your suggestions. We first demonstrate the effectiveness of our adaptive-weighted loss by comparing it with other metric-learning losses (e.g., ContrastiveLoss, LiftedStructureLoss, GeneralizedLSLoss) in Table E of the rebuttal PDF, where it achieves the best performance among all the losses. We then provide an ablation replacing our probabilistic PnP with another differentiable PnP, BlindPnP [1], in Table F of the rebuttal PDF. We further provide ablation studies that separately explore the impact of the KL divergence loss and the pose loss in Table B of the rebuttal PDF, where both losses improve the performance.
[1] Campbell D. et al. Solving the Blind Perspective-n-Point Problem End-To-End With Robust Differentiable Geometric Optimization. ECCV 2020
**Q3: Related works HSCNet and P2-Net.**
We address a different task from the mentioned HSCNet and P2-Net. HSCNet targets visual localization, which aims to estimate the camera pose of a query image with respect to a known environment, as described in the first sentence of its introduction. We distinguish our task from visual localization in L.87-L.92 of the Related Work: we focus on image-to-point cloud registration in dynamic scenes captured at different time steps, instead of first pre-building the whole environment and learning to localize a query image. P2-Net learns pixel-point matching to establish 2D-3D correspondences, but also targets visual localization, which likewise differs from our task as described above. We agree that P2-Net, which also learns 2D-3D feature matching, could with some modifications be applied to image-to-point cloud registration as we do. However, P2-Net is not open source, so no implementation is available for a fair comparison. Note that the GitHub link provided in the abstract of P2-Net is empty.
**Q4: Generalize the proposed method across datasets.**
We refer the reviewer to "Global-Q1: Cross dataset validation." of the global response for a discussion of cross-dataset validation.
**Q5: The data used for ablation studies.**
We conduct ablation studies to explore each design in our framework and important hyper-parameters such as image resolution, point density, safe radius, and feature dimension. Using a subset of a large-scale dataset for ablation studies is efficient and can still justify the effectiveness of the designs.
We use the first two sequences of the KITTI dataset, which contain about 30% of its image-to-point cloud pairs, as a subset for comprehensive ablations, allowing us to efficiently verify more hyper-parameters and designs.
We agree that conducting the ablation studies under the same setting as the main experiment, i.e., on the whole KITTI dataset, would validate the design choices more thoroughly than using only a subset. Due to the limited time in the rebuttal period, we will conduct the ablation studies on the whole KITTI dataset in the revision.
**Q6: Limitations and failure cases.**
One limitation is that the feature matching errors at some noisy points of LiDAR point clouds can be very large, which has a strong negative influence on cross-modality registration. As shown in Figure 3 of the supplementary, although the feature matchings at most pixels/points are accurate, some matching results at noisy points of the scene (e.g., scans of bushes) are not stable. The reason is that the network has no special design for handling noisy points, leading to unstable 3D features at those points and, in turn, reduced registration accuracy in some complex scenes. We will add more discussion of the limitations and failure cases of our method in the revision.
---
Rebuttal Comment 1.1:
Title: Comments on Authors' Rebuttal
Comment: I appreciate the authors' effort in addressing most of my concerns, including technical contributions and loss ablations.
One additional comment: regarding the claim in the response, "almost no work explored the cross-modality contrastive learning between image features and voxel-point features", there is one seemingly related work I am aware of, though it is on transfer learning:
- Liu et al. Learning from 2d: Contrastive pixel-to-point knowledge transfer for 3d pretraining. 2021.
Another question about the loss ablation study: were these new quantitative results obtained on all available data of KITTI or still on sequences 0-1?
---
Reply to Comment 1.1.1:
Title: Response to Additional Comments
Comment: Thanks for your response and we are happy to hear that our rebuttal helped. We would like to express our sincere gratitude for the constructive comments and valuable time. We respond to your additional questions as follows.
**Q1: Claim in the response.**
We do not claim to be the first to explore cross-modality contrastive learning, which was already used in CorrI2P, P2-Net, and the paper you mention [1], all of which leverage Point-to-Pixel matching to learn cross-modality patterns. Rather, we claim to propose the first cross-modality contrastive learning framework between **image features and voxel-point features**, where we design a triplet network to learn **VoxelPoint-to-Pixel Matching** instead of **Point-to-Pixel Matching**. This reduces the modality domain gap and leads to robust 2D-3D correspondences and registrations. As a comparison to previous "Point-to-Pixel Matching" methods, we analyzed the effectiveness of voxels in "Comparison to Point-to-Pixel Matching" in Sec. 3.1 and "The Analysis of VP2P Matching" in Sec. 3.1 of the supplementary. We visualized the learned latent space of our VoxelPoint-to-Pixel Matching and compared it quantitatively and qualitatively with Point-to-Pixel Matching to demonstrate that using regular voxels reduces the domain gap between 2D and 3D data.
Our motivation for introducing VoxelPoint-to-Pixel Matching for cross-modality contrastive learning is that irregular points can only be processed by MLPs to learn representations, while pixels are regular and processed by CNNs. The large differences between points and pixels, and between the computations in MLPs and CNNs, lead to different feature domains and make it hard for previous "Point-to-Pixel Matching" works to learn a structured shared latent space for 2D and 3D data. We observe that voxels share much greater similarity with pixels than points do, since both voxels and pixels are regular and represented on grids, and are thus suitable for CNNs to apply spatially-local convolutions. Based on this analysis, we propose **VoxelPoint-to-Pixel Matching** to reduce the 2D-3D modality domain gap, leading to robust and accurate image-to-point cloud registration.
**Q2: Data for additional ablations in rebuttal.**
For an intuitive comparison with the previous results, we conducted the additional ablations under the same setting as Table 3 in Sec. 4.6, using sequences 0-1 as the training data. Please note that we conducted extensive quantitative comparisons and ablation studies in the rebuttal, and found it hard to run comprehensive ablations on the whole KITTI dataset within the limited rebuttal period. We will conduct the ablation studies on the whole KITTI dataset in the revision.
[1] Liu et al. Learning from 2d: Contrastive pixel-to-point knowledge transfer for 3d pretraining. 2021. | Summary: The paper proposes a 2D-to-3D registration pipeline using a differentiable PnP method (Epro-PnP) and integrates 3D information using both voxel- and point-based features. These designs target known problems such as the domain gap when fusing MLP-based point features with CNN-based pixel features, and the non-differentiable post-processing step for computing the transformation, which prevents training the entire model end-to-end. Experiments on KITTI and nuScenes, along with ablation studies, show the effectiveness of the proposed pipeline while achieving high efficiency.
Strengths: - The proposed method improves the current state-of-the-art methods by a large margin in terms of the accuracy and efficiency on KITTI and nuScenes datasets.
- Although most techniques used in the paper are not new, combining them yields effective results.
- The idea of combining both the voxel-based features and the point cloud-based features helps with the domain difference between MLP-based and CNN-based features.
- The writing of the paper is okay and is easy to follow.
Weaknesses: Major:
- In the ablation study Table 3, the authors showed that the differentiable PnP solver improves the accuracy of the method. Could the authors further explain the reasons?
- In Line 287-288, the authors mentioned that they did not exclude extreme cases when reporting the performance. However, the authors could report both the performance with/without extreme cases for a clear and fair comparison to other methods.
- The paper uses non-differentiable PnP to solve pose during inference. What about other methods applying a fast PnP during inference? Will they achieve similar speedups?
- Ablation study (Table 3): how about using 1 or 2 of the components to show a more comprehensive study of the effectiveness of the combination of the module?
- Table 4 would benefit from an added efficiency comparison.
- The limitations of the method should be discussed in the main paper.
Minor:
- The attached code does not provide much intuition. Pseudocode in the text would aid understanding of the method.
- There are some typos in the paper that need attention.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address my questions/concerns above. Specifically, my main concerns are some unclear/inconsistent arguments and experimental designs.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation of the method is not discussed. Some of the limitations that can be discussed are as follows:
1. The method uses a non-differentiable fast PnP solver during the inference, which is different from the optimization process. How will it affect the performance? And will any fast PnP solver be suitable here? What is the limitation during inference?
2. What is the limitation of the method when applied to different datasets/scenes?
3. How to choose model parameters in order to balance accuracy and efficiency? The authors have included some ablation studies in both the main paper and the supplementary materials. However, the tradeoff between accuracy and efficiency could be further discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Why does differentiable PnP improve the accuracy?**
The main insight behind introducing the end-to-end PnP is to impose supervision directly on the predicted transformations. Previous works learn 2D-3D feature matching with non-differentiable PnP as a post-processing procedure to estimate the transformations. The optimization target during training is thus only the pseudo supervision constructed from 2D-3D correspondences. This insufficient supervision leads to large errors, since the network has no ability to handle incorrect matching pairs, which strongly degrade the results. By implementing an end-to-end framework, we bring direct supervision to the final transformation, which is the most accurate supervision and leads to more stable and more accurate cross-modality registration.
**Q2: Comprehensive comparison with baseline methods.**
We refer the reviewer to “Global-Q2: Comprehensive comparison with baseline methods. ” of the global response for the results and analyses.
**Q3: Can fast PnP speedup other methods.**
Actually, we already use the same setting as ours, applying EPnP with RANSAC to predict the rigid transformation at inference time, for the SoTA baseline CorrI2P, as described in L.283-L.285 of Sec. 4.2. The reason CorrI2P fails to achieve fast pose inference is that it produces many wrong matchings, resulting in wrong 2D-3D correspondences that require a large number of RANSAC iterations to eliminate, which is very time-consuming. In contrast, our method produces much better cross-modality matchings and thus robust 2D-3D correspondences, as shown in Figure 6, so only a few RANSAC iterations are needed to filter wrong correspondences. The other baseline, DeepI2P, performs frustum classification, where no 2D-3D correspondences are constructed, so EPnP cannot be applied to solve the final pose estimation. Instead, DeepI2P solves a time-consuming inverse camera projection problem for pose estimation, even with a 60-fold pose initialization to avoid crashing, leading to long pose inference time. We also note that our method requires far fewer parameters than previous SoTA methods (e.g., CorrI2P and DeepI2P), as shown in Table 2, leading to much shorter network inference time.
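To make the RANSAC argument above concrete, the standard iteration bound shows how quickly the number of required iterations grows as the inlier ratio of the 2D-3D correspondences drops (an illustrative sketch, not code from our implementation; EPnP's minimal sample size of 4 is assumed):

```python
import math

def ransac_iterations(inlier_ratio: float, sample_size: int = 4,
                      confidence: float = 0.99) -> int:
    """Standard RANSAC bound: number of iterations needed to draw at least
    one all-inlier minimal sample with the given confidence."""
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - inlier_ratio ** sample_size))

# Clean correspondences (high inlier ratio) need only a handful of
# iterations, while noisy ones need thousands of them.
few = ransac_iterations(0.8)    # strong matchings
many = ransac_iterations(0.2)   # weak matchings
assert few < 10 < many
```

This is why better cross-modality matchings translate directly into a large runtime gap at inference time.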
**Q4: Comprehensive ablations on the designs.**
We provide comprehensive ablations on our framework designs in Table H of the rebuttal PDF. We also report the performance when using one, two, or three of our designs at the same time in Table H, where the results comprehensively demonstrate the effectiveness of each design.
**Q5: Add efficiency comparison to Table 4.**
We add the efficiency comparison to Table 4 and provide the full results in Table G of the rebuttal PDF.
**Q6: Pseudo code for proposed method.**
We provide pseudocode for the training and testing procedures of our proposed method in Algorithms 1 and 2 of the rebuttal PDF.
**Q7: What is the limitation during inference? How does leveraging EPnP for inference affect the performance? Would other fast PnP solvers be suitable here?**
The main limitation during inference is the time cost, which determines the potential of our method for autonomous driving scenarios where low latency is required. We replace Epro-PnP with EPnP during inference to reduce the pose inference time. We further test the performance of directly leveraging Epro-PnP to predict poses, which achieves similar performance (RTE = 0.74) to EPnP (RTE = 0.75) on the KITTI dataset. However, as reported in L.340-L.341, the inference time increases from 0.19 s to 2.38 s if we use Epro-PnP. The reason is that Epro-PnP adopts a Gauss-Newton-based iterative PnP solver with time complexity O($N^2$), while EPnP is much faster with time complexity O($N$). Therefore, to enable fast registration, we use EPnP as the solver at inference time. EPnP is the most widely used and well-explored method for efficient pose estimation from correspondences with O($N$) complexity, so we choose it over other PnP solvers for fast and robust registration at inference time.
**Q8: Limitations when applied to different datasets/scenes.**
Our method shows strong robustness and generality across datasets (e.g., KITTI and nuScenes), where we achieve more accurate registration results on both, as shown in Table 3 of the main paper, whereas the previous SoTA methods CorrI2P and DeepI2P show a performance decline on nuScenes. This demonstrates that our method generalizes better than previous methods and remains accurate on different datasets. We also provide further discussion in "KITTI vs. nuScenes" in Sec. 3.2 of the supplementary.
We further note that in our experimental setting, each dataset is split by sequences, which are collected in different scenes, so the test sequences are unseen scenes for the trained model.
Therefore, our performance on the KITTI or nuScenes dataset already demonstrates the ability of our method to generalize to unseen scenes.
For more limitations of our method, please refer to "Global-Q3" and "Global-Q1" of the global response.
**Q9: How to choose model parameters for balancing accuracy and efficiency?**
We mainly consider accuracy when choosing model parameters, while also taking efficiency into account for a lightweight model and faster convergence. As shown in Table G of the rebuttal PDF, increasing the image resolution greatly improves performance (e.g., from 0.84/2.90 to 0.65/2.10 in terms of RTE/RRE), so we use a higher image resolution with acceptable computational cost. Increasing the point density yields only a marginal improvement (e.g., from 0.71/2.01 to 0.60/2.09), so we choose a moderate density with good performance and efficiency.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thank the authors for providing additional information and experiments. I have read all the comments of the authors and the reviewers, and I appreciate that the authors could further add the ablation studies and the limitation discussions.
Most of my concerns were addressed. I did not agree with the authors' response about the generalizability of the proposed method; however, the additional Table C may provide an example of the method's generalization. The limitations I listed were just examples. I hope the authors will discuss this in the main text.
In conclusion, I encourage the authors to improve the paper based on all the comments from the reviewers.
---
Reply to Comment 1.1.1:
Title: Thanks for the comments
Comment: Dear reviewer yFFc,
We will follow your advice to update our manuscript. Thanks for your effort and time, and we really appreciate your expertise.
Best,
The authors | Summary: This paper proposes a method to register an image with its nearby Lidar scan, and it comes with three modules:
1) A sparse 3D conv-net and point-net to extract 3D features; A 2D conv-net to extract 2D features;
2) An intersection detection module to discard non-matchable 2D and 3D points;
3) Modified Circle, Probabilistic PnP, and pose losses are used to train the proposed triple network.
Strengths: 1) This paper is well-written and easy to follow;
2) This paper does a good job of combining multiple existing modules to address the image-to-LiDAR registration problem;
3) According to Table 1, the proposed method outperforms previous works addressing the image2Lidar registration problem.
Weaknesses: Though this paper successfully combines multiple existing modules, some design choices are yet to be investigated. Specifically,
1) Generalization ability.
According to the paper, the proposed method focuses on autonomous driving datasets and conducts experiments on the 3-DoF registration problem (Line 267). I am concerned that the network overfits to this specific configuration. To address my concern, please conduct the following experiments:
a) Cross-validation. Using a network trained on the KITTI dataset to test on the nuScenes dataset;
b) Conducting 6-DoF registration, rather than 3-DoF registration. Please at least add rotations around the x-axis and y-axis;
2) Pipelines.
a) The effectiveness of the modified Circle Loss. Please add the comparison with respect to the original Circle Loss in Table 3;
b) Since metric-learning plays an important role in this paper, I would expect authors to conduct comparisons with off-the-shelf metric-learning losses other than the Circle Loss. Please refer to https://github.com/KevinMusgrave/pytorch-metric-learning;
c) The effectiveness of the Probabilistic PnP and pose loss. I would expect a two-stage training strategy here by first training the network by only using the adaptive-weighted loss, and then adding the Probabilistic PnP and pose loss. This reminds me of the work [Solving the Blind Perspective-n-Point Problem End-To-End With Robust Differentiable Geometric Optimization]. Please separate the impact of KL divergence loss and pose loss, and check their effectiveness independently in Table 3. It would be nice to further compare the proposed Probabilistic PnP with respect to the differentiable PnP module in [Solving the Blind Perspective-n-Point Problem End-To-End With Robust Differentiable Geometric Optimization].
3) Minor.
I think processing 2D and 3D data with convolutions may not justify the claim of "Sharing similar characteristics in feature space" (Line 162), as 3D convolution is invariant to shift, viewpoint, distance, etc. In contrast, 2D convolution is vulnerable to viewpoint and scale differences.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Cross dataset validation.**
We refer the reviewer to "Global-Q1: Cross dataset validation." of the global response for a discussion of cross-dataset validation.
**Q2: Conducting 6-DoF registration, rather than 3-DoF registration.**
We follow previous methods (e.g., CorrI2P and DeepI2P) in conducting 3-DoF registration for a fair comparison. However, our proposed method is designed to match cross-modality features for image-to-point cloud registration and is not limited to the 3-DoF setting. We conduct 6-DoF registration on the KITTI dataset to demonstrate the ability of our method to handle more difficult situations. Specifically, we generate 6-DoF mis-registration transformations with 3D translations along the x-, y-, and z-axes within $\pm$5 m, and rotations around the x-, y-, and z-axes within $\pm$120$^o$. We report our performance and compare with the previous SoTA method CorrI2P in Table D of the rebuttal PDF. The results demonstrate that our method also works on the 6-DoF image-to-point cloud registration task with large mis-registrations, achieving accurate registration performance of 0.96/3.73/76.7 in terms of RTE/RRE/Acc., while CorrI2P fails at this more difficult task and produces much worse results (4.29/12.28/42.53).
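The 6-DoF mis-registration sampling described above can be sketched as follows (a minimal illustration with hypothetical names, assuming independent per-axis rotations composed as Rz·Ry·Rx; our actual data pipeline may differ in details):

```python
import numpy as np

def random_se3(rng, max_trans=5.0, max_angle_deg=120.0):
    """Sample a random 6-DoF mis-registration: rotations about x, y, z
    within +/-max_angle_deg and a translation within +/-max_trans metres."""
    ax, ay, az = rng.uniform(-np.radians(max_angle_deg),
                             np.radians(max_angle_deg), 3)
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # composed rotation
    T[:3, 3] = rng.uniform(-max_trans, max_trans, 3)  # translation
    return T

T = random_se3(np.random.default_rng(0))
# A valid rigid transform: rotation block is orthonormal with det +1.
assert np.allclose(T[:3, :3] @ T[:3, :3].T, np.eye(3))
assert np.isclose(np.linalg.det(T[:3, :3]), 1.0)
```

Applying such transforms to the point clouds yields the large-mis-registration setting evaluated in Table D.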
**Q3: Comparison with original Circle loss.**
We provide the comparison with the original circle loss as "Original CircleLoss" in Table E of the rebuttal PDF, where the performance drops from 0.65/2.10/91.14 to 0.96/2.94/88.10 in terms of RTE/RRE/Acc without our adaptive-weighted optimization designs.
**Q4: Comparisons with off-the-shelf metric-learning losses.**
We compare with several off-the-shelf metric-learning losses in Table E of the rebuttal PDF. The implementations of these losses are taken from the GitHub repo you suggested. The results demonstrate the effectiveness of our proposed adaptive-weighted optimization, which achieves the best performance among all the losses (e.g., ContrastiveLoss, LiftedStructureLoss, GeneralizedLSLoss). The reason is that other metric-learning losses treat each pair of samples equally and cannot distinguish hard from easy pairs, which leads to ambiguous convergence, especially in difficult cross-modality matching. In contrast, our flexible optimization strategy with adaptive weighting forces the network to focus more on harder samples, leading to a distinctive cross-modality latent space in which 2D-3D correspondences can be established more accurately. We further note that our adaptive-weighted strategy is a general technique that can improve other metric-learning losses; we integrate it into GeneralizedLSLoss and report the performance as "GeneralizedLSLoss-AW".
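For reference, the circle-loss formulation that inspired our adaptive weighting can be sketched as below: each pair receives its own weight proportional to its distance from the optimum, so hard samples dominate the objective (this is the generic circle loss of Sun et al., not our exact adaptive-weighted variant):

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=32.0):
    """Circle loss over positive similarities sp and negative similarities sn.
    Adaptive weights: positives far below O_p = 1 + m and negatives far
    above O_n = -m are penalized more strongly."""
    sp, sn = np.asarray(sp, float), np.asarray(sn, float)
    ap = np.clip(1.0 + m - sp, 0.0, None)   # weights for positive pairs
    an = np.clip(sn + m, 0.0, None)         # weights for negative pairs
    delta_p, delta_n = 1.0 - m, m           # decision margins
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    return np.log1p(np.exp(logit_p).sum() * np.exp(logit_n).sum())

# A hard (low-similarity) positive contributes far more than an easy one.
easy = circle_loss(sp=[0.9], sn=[0.1])
hard = circle_loss(sp=[0.3], sn=[0.1])
assert hard > easy
```

Our variant additionally draws the positive/negative pairs from the radius-based generation strategy described in Sec. 3.2.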
**Q5: The effectiveness of the Probabilistic PnP and pose loss.**
We provide ablation studies on the probabilistic PnP and pose loss in Table F of the rebuttal PDF, following your suggestions. We first report the performance of a two-stage training strategy, "TwoStage (w/ Ours PnP)": the network is trained with only the adaptive-weighted loss for the first half of the epochs, and the probabilistic PnP losses are added for the remaining epochs. The two-stage strategy outperforms using only the adaptive-weighted optimization, "w/o Diff. PnP" in Table F (0.69 vs. 0.75), and is slightly worse than our one-stage training with probabilistic PnP, "w/ Ours PnP" in Table F (0.69 vs. 0.65). We further provide ablation studies that separately explore the impact of the KL divergence loss and the pose loss in Table B of the rebuttal PDF, where both losses improve the performance.
**Q6: Compare Probabilistic PnP with BlindPnP.**
We provide the ablations of replacing our Probabilistic PnP with BlindPnP [1] in Table F of the rebuttal PDF, where BlindPnP performs worse than our Probabilistic PnP (0.73 vs. 0.65). By introducing supervisions on the predicted pose distribution, the Probabilistic PnP brings more robust guidance for optimizations than BlindPnP which only leverages the L2 loss to guide the pose learning.
[1] Campbell D. et al. Solving the Blind Perspective-n-Point Problem End-To-End With Robust Differentiable Geometric Optimization. ECCV 2020
**Q7: Can processing 2D and 3D data with convolutions reduce the domain gap?**
Our motivation is that irregular points can only be processed by MLPs to learn representations, while pixels are regular and processed by CNNs. The large differences between points and pixels, and between the computations in MLPs and CNNs, lead to different feature domains and make it hard for previous works to learn a structured shared latent space for 2D and 3D data.
We observe that voxels share much greater similarity with pixels than points do, since both voxels and pixels are regular and represented on grids, which makes them suitable for CNNs. We agree that 2D and 3D CNNs have some differences, but they share a similar operation: performing convolutions on regular grid data (pixels or voxels) to explore spatially-local patterns, which is quite different from the feature patterns obtained by applying MLPs to irregular point sets. We analyzed the effectiveness of introducing voxel information in "Comparison to Point-to-Pixel Matching" of Sec. 3.1 and "The Analysis of VP2P Matching" of Sec. 3.1 in the supplementary. We visualized the learned latent space of our VoxelPoint-to-Pixel Matching and compared it quantitatively and qualitatively with Point-to-Pixel Matching, demonstrating that using regular voxels reduces the domain gap between 2D and 3D data.
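As a minimal illustration of why voxels are CNN-friendly, quantizing an irregular point set onto a grid yields regular integer coordinates analogous to pixel locations (a generic sketch, not our actual sparse-convolution input pipeline; the voxel size is an arbitrary example value):

```python
import numpy as np

def voxelize(points, voxel_size=0.3):
    """Quantize an (N, 3) point set into unique integer voxel coordinates.
    Voxels, like pixels, live on a regular grid, so 3D CNNs can apply
    spatially-local convolutions to them."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(coords, axis=0)

pts = np.array([[0.1, 0.1, 0.1], [0.15, 0.12, 0.11], [1.0, 1.0, 1.0]])
vox = voxelize(pts)
# The two nearby points fall into the same voxel, so only 2 voxels remain.
assert vox.shape == (2, 3)
```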
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal and new experiments.
They addressed my comments well, and I would update my score to borderline accept.
Please reflect these new experiments in the revised paper.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer QyPU
Comment: Dear Reviewer QyPU,
Many thanks for all the helpful comments and positive assessment, we will add the new experiments in the revised paper. We really appreciate you for upgrading the score.
Best,
Authors | Rebuttal 1:
Rebuttal: We upload a rebuttal PDF with some experimental results requested by the reviewers. For the following rebuttals, we use “rebuttal PDF” to point to the provided PDF like “in Table A of the rebuttal PDF”.
We respond to some common questions in reviews as follows.
**Global-Q1: Cross dataset validation.**
Almost all widely used methods (e.g., Cylinder3D [1], CenterPoint [2]) in 3D perception (e.g., segmentation/detection) train a single model per dataset, i.e., one model for KITTI and one for nuScenes, and do not perform cross-dataset validation. The reason is that cross-dataset validation is extremely difficult for 3D perception tasks in autonomous driving, since the sensors (e.g., LiDARs and cameras) and the data collection procedures differ considerably from dataset to dataset (e.g., KITTI vs. nuScenes). This leads to large differences in predicted feature distributions when a model trained on one dataset is evaluated directly on another. However, we note that experiments within a single dataset already demonstrate the ability of our method to generalize to unseen sequences or scenes: each dataset (e.g., KITTI, nuScenes) is split by sequences, and different sequences are collected in different scenes, so the test sequences are unseen scenes for the trained model.
Moreover, we show that our model trained on a single dataset can learn underlying patterns that are shared across datasets. Specifically, we leverage our model pretrained on the KITTI dataset to conduct few-shot registration experiments on the nuScenes dataset. The model is finetuned using only 10% of the samples in the nuScenes training set and is evaluated on the full test set. The results are shown as “$Ours_{Scratch}$” and “$Ours_{KITTIPretrain}$” in Table C of the rebuttal PDF, where fine-tuning the KITTI-pretrained model on the small subset of nuScenes significantly outperforms training a randomly initialized model from scratch, i.e., 1.79 vs. 2.72 in terms of RTE.
[1] Zhu X, et al. Cylindrical and asymmetrical 3d convolution networks for LiDAR segmentation. CVPR2021
[2] Yin T, et.al. Center-based 3d object detection and tracking. CVPR2021
**Global-Q2: Comprehensive comparison with baseline methods.**
We provide a comprehensive comparison with the baseline methods (e.g., CorrI2P and DeepI2P) under the same setting as CorrI2P, which excludes RTE larger than 5m and RRE larger than 10$^o$, in Table A of the rebuttal PDF.
To further provide a more realistic and convincing comparison with the SoTA method CorrI2P on performance without extreme cases, we remove the same number of bad samples as CorrI2P and report the performance of our method as “Ours *”. By evaluating on the same number of test samples without extreme cases, we believe this is a relatively fair comparison with CorrI2P. As shown in Table A of the rebuttal PDF, we achieve the best performance under all metrics. In particular, our method is about 3 times better than the SoTA baseline CorrI2P on the difficult nuScenes benchmark, even after removing the extreme samples. The result demonstrates that our method not only produces more stable registrations with far fewer failure cases but also produces much more precise registrations on the successful samples.
**Global-Q3: Limitations and failure cases.**
One of our limitations is that the feature matching errors at some noisy points of LiDAR point clouds may be very large, which has a very negative influence on cross-modality registration. As shown in Figure 3 of the supplementary, although the feature matchings at most pixels/points are accurate, some feature matching results at noisy points of the scene (e.g., scans of bushes) are not stable. The reason is that the network has no special design for handling noisy points, leading to unstable 3D features at those points and further affecting registration accuracy in some complex scenes. We will add more discussion on the limitations and failure cases of our method in the revision.
Pdf: /pdf/9ac7567439436e01adc4a847e1410082e555fc86.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes to learn a structured cross-modality latent space to represent pixel features and 3D features, designing a triplet network to learn VoxelPoint-to-Pixel matching. The proposed method is trained in an end-to-end manner by imposing supervision directly on the predicted pose distribution with a differentiable probabilistic PnP solver. The experiments seem good.
Strengths: (1) A framework to learn Image-to-Point Cloud registration by learning a structured cross-modality latent space with adaptive-weighted optimization, together with an end-to-end training scheme driven by a differentiable PnP solver.
(2) The paper represents the 3D elements as a combination of voxels and points to overcome the pattern gap between points and pixels, where a triplet network is designed to learn VoxelPoint-to-Pixel matching.
Weaknesses:
1. The motivation for using voxel information is unclear. As described in the paper, one of the bottlenecks of previous methods is that points and pixels have different characteristics, with patterns learned in different manners (MLP and CNN); however, a domain gap also exists between voxelized information and pixels, so why the paper uses voxel information should be discussed.
2. The ablation studies are not sufficient:
(1). In Sec. 3.2, Adaptive-Weighted Optimization, a hyper-parameter radius r is used, which should be discussed; because it affects the positive and negative pairs, experiments should be provided to verify its robustness.
(2). In probabilistic PnP, the effectiveness of L_{pose} and L_{kl} should be verified separately.
(3). I agree that the setting in CorrI2P, which excludes RTE larger than 5m and RRE larger than 10$^o$, is unsuitable; however, it would be better to also provide results under the same setting as CorrI2P, since that would help readers understand where the main improvement comes from.
(4). Limitations and failure cases should be discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and failure cases should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Motivation of introducing voxel branch.**
Our motivation is that irregular points are only suitable for processing by MLPs to learn representations, while pixels are regular and processed by CNNs. The large differences between points and pixels, and between the computations in MLPs and CNNs, lead to different feature domains and make it hard for previous works to learn a structured shared latent space for 2D and 3D data. We observe that voxels share much greater similarity with pixels than points do, since both voxels and pixels are regular and represented on grids, which are suitable for CNNs performing spatially local convolutions. We analyzed the effectiveness of introducing voxel information in “Comparison to Point-to-Pixel Matching” of Sec.3.1 and “The Analysis of VP2P Matching” of Sec.3.1 in the supplementary. We visualized the learned latent space of our VoxelPoint-to-Pixel Matching and quantitatively and qualitatively compared it with Point-to-Pixel Matching to demonstrate that using regular voxels can reduce the domain gap between 2D and 3D data.
**Q2: Ablations on hyper-parameter radius $r$.**
We have provided the ablation studies on the safe radius $r$ in Sec.3.3 in the supplementary. We set the safe radius $r$ to 0.5, 1, 2 and 4 pixels, and report the performances in Table 2 of the supplementary.
**Q3: The effectiveness of $L_{pose}$ and $L_{KL}$.**
We conducted ablations to verify $L_{KL}$ and $L_{pose}$ separately in Table B of the rebuttal PDF. We observe that when removing the probabilistic PnP loss $L_{KL}$ in Eq. (7) or the pose loss $L_{pose}$ in Eq. (8) separately, the RTE (lower is better) rises from 0.65 to 0.72 / 0.68. When further removing both, the RTE degenerates to 0.75. These results demonstrate that the probabilistic PnP loss in Eq. (7) brings the major improvement to registration accuracy, and the pose loss in Eq. (8) provides additional enhancement.
**Q4: Comprehensive comparison with baseline methods.**
We provide a comprehensive comparison with the baseline methods (e.g., CorrI2P and DeepI2P) under the same setting as CorrI2P, which excludes RTE larger than 5m and RRE larger than 10$^o$, in Table A of the rebuttal PDF.
To further provide a more realistic and convincing comparison with the SoTA method CorrI2P on performance without extreme cases, we remove the same number of bad samples as CorrI2P and report the performance of our method as “Ours *”. By evaluating on the same number of test samples without extreme cases, we believe this is a relatively fair comparison with CorrI2P. As shown in Table A of the rebuttal PDF, we achieve the best performance under all metrics. In particular, our method is about 3 times better than the SoTA baseline CorrI2P on the difficult nuScenes benchmark, even after removing the extreme samples. The result demonstrates that our method not only produces more stable registrations with far fewer failure cases but also produces much more precise registrations on the successful samples.
**Q5: Limitations and failure cases.**
One of our limitations is that the feature matching errors at some noisy points of LiDAR point clouds may be very large, which has a very negative influence on cross-modality registration. As shown in Figure 3 of the supplementary, although the feature matchings at most pixels/points are accurate, some feature matching results at noisy points of the scene (e.g., scans of bushes) are not stable. The reason is that the network has no special design for handling noisy points, leading to unstable 3D features at those points and further affecting registration accuracy in some complex scenes. We will add more discussion on the limitations and failure cases of our method in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, most of our concerns are addressed. Though the motivations should be further discussed in the following versions of the paper, the experiments can prove the proposed point to some extent, so I update my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Rating clarification inquiry
Comment: Dear Reviewer CqXY,
We would like to express our sincere gratitude for the constructive comments you provided on our work. Your insights are invaluable, and we are committed to incorporating your suggestions and will further discuss the motivations in the following versions of the paper.
We do, however, have a slight query that we hope you can kindly clarify. We noted that your previous score was already "borderline accept". In light of your recent positive feedback, are you suggesting to "update the score to weak accept" instead of "update the score to borderline accept"? Your clarification would be greatly appreciated.
Thank you once again for your thoughtful feedback and time invested in evaluating our work.
Best regards,
Authors | null | null | null | null | null | null |
Connected Superlevel Set in (Deep) Reinforcement Learning and its Application to Minimax Theorems | Accept (poster) | Summary: The authors present novel discoveries about policy optimization problems: 1. the superlevel set of the objective function with respect to the policy parameter is always a connected set, and 2. the optimization objective, as a function of the policy parameter and reward, satisfies a stronger “equiconnectedness” property. Based on these discoveries, the authors derive a novel minimax theorem for a robust RL problem.
Strengths: Well written and well organized.
Novel discoveries of policy optimization problems in reinforcement learning.
Derived a novel minimax theorem for a robust RL problem which may contribute to the development of novel robust algorithm in the future. Theoretical analysis and proofs are nicely presented.
Weaknesses: Checking math is not my strength so I'll refrain from providing my opinion on the math. I think one weakness is about the application side of the paper and lack of any experiments(maybe just a toy example?).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Would it be possible to design a toy env and run some small experiments to explicitly show how the derived theorem can help develop better algorithm in addition to the reward poisoning example?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Again, checking math is not my strength so I'll refrain from providing my opinion on the math. One limitation is lack of experiments and application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing the paper. We agree that it would be good to complement the theory with simulations in some way, but so far the results mostly focus on the fundamental structure of the optimization problem, and it is unclear what simulations would be meaningful. We do want to point out that our theoretical results can lead to the design of better algorithms as we discuss in the response to Reviewer ceyJ. | Summary: This work shows the superlevel set of the objective function in reinforcement learning is always equiconnected for both tabular policy and neural policy. An application of the connected property is the minimax theorem. As a consequence, reward attack robust RL can be shown to have Nash equilibrium.
Strengths: 1. The paper shows that the superlevel set of the RL objective is connected. Furthermore, the collection of objective functions under the tabular and neural policies is equiconnected.
2. The connected property is used to establish the minimax theorem.
3. As a corollary, the authors show that reward attack robust RL problems have Nash equilibrium. This is the first work that shows such a result.
4. The paper is well-written.
Weaknesses: 1. Although Theorems 1 and 2 are general results that cover a large class of policies, it is not clear whether the minimax theorem can be used to show the existence of Nash equilibria for other robust RL problems, e.g., when the agent is uncertain about the transition kernels.
2. It is unclear whether the theoretical results can help design algorithms for robust RL problems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have some minor questions:
1. What is the intuition behind Assumption 2? Why do we need the non-zero scalar condition?
2. Can we use the equiconnected property to prove minimax theorems other than Theorem 3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the feedback, which we will incorporate when making the next revision. We confirm that our results can be used to drive the design of algorithms (please see our response to Reviewer ceyJ above).
The technical reason for considering Assumption 2 is to make the activation function invertible and its inverse unique, which ensures that the activation function does not break the connected path constructed in the analysis. A sufficient condition that guarantees Assumption 2 is that the activation function is 1) monotonically increasing or decreasing and 2) piecewise linear.
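As a concrete instance of this sufficient condition, a leaky ReLU is monotonically increasing and piecewise linear, hence invertible with a unique inverse. The following NumPy sketch is our own illustration of the condition, not code from the paper:

```python
import numpy as np

def leaky_relu(x, a=0.1):
    # Monotonically increasing and piecewise linear, hence invertible
    # with a unique inverse -- the sufficient condition for Assumption 2.
    return np.where(x >= 0, x, a * x)

def leaky_relu_inv(y, a=0.1):
    # Invert each linear piece; uniqueness follows from strict monotonicity.
    return np.where(y >= 0, y, y / a)

x = np.linspace(-3.0, 3.0, 101)
assert np.allclose(leaky_relu_inv(leaky_relu(x)), x)
```

Because the activation never flattens out, it cannot collapse distinct parameters onto the same output, which is what keeps the connected path in the analysis intact.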
Regarding the question on whether the equiconnectedness property can be used to derive results other than Theorem 3, we would like to point to the pseudo-linearity condition that Reviewer efCX brought up from a recent work [Jin 2022]. In the tabular case, this condition gets implied by our results and analysis of Theorem 1, and it plays an important role in the algorithm design process in [Jin 2022]. Theorem 2 of our paper can be used to establish a neural network version of the pseudo-linearity condition, which may be helpful in extending their algorithm to the function approximation setting.
References
Jin, Yujia, Vidya Muthukumar, and Aaron Sidford. "The complexity of infinite-horizon general-sum stochastic games." arXiv preprint arXiv:2204.04186 (2022).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. My questions have been addressed. | Summary: This paper aims to enhance the comprehension of the optimization landscape in reinforcement learning (RL) for policy optimization problems. The primary contribution of this work is to demonstrate the connectedness of the superlevel set of the policy optimization problem in RL under a tabular policy representation. Furthermore, the authors establish that the superlevel set of the objective function, considering the policy parameters (i.e., weights of the neural networks), remains connected across all levels. The authors also illustrate the practical implications of their main findings by deriving a minimax theorem for a specific class of robust RL problems.
Strengths: - The paper studies an exciting area of research on optimization applied to policy learning in reinforcement learning.
- The paper is generally well written and coherent.
Weaknesses: - The paper briefly talks about robust RL as a potential application of studying super level connectedness. It does not provide any practical robust RL algorithm.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I would like the authors to include an explanation for why one needs to expand their understanding of RL optimization landscape beyond gradient domination. To motivate the problem to the general conference audience, it is important to list additional insights one might gain from studying super level set connectedness (SLSC). For example, does SLSC help identifying bottlenecks, regions of poor convergence, or potential areas for algorithmic improvement?
- It is also not clear to me what were the exact challenges in establishing results for deep RL?
- Are the results in Sec 2 similar to Nguyen [2019]? If not, what are the exact differences?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors are clear on the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read the paper and for providing important feedback, which we will carefully consider and incorporate in the next revision.
First of all, we confirm that studying SLSC can inform algorithm design. As we discuss in the last paragraph of page 5, given any two policies, our results provide us with a tool to generate a spectrum of policies that interpolate their values. This means that we can generate a continuum of optimal policies if we find two (possibly by gradient descent from different initializations). If the agent has a secondary preference over these policies (for example, some policies are easier to implement on the physical actuator), an eventually more preferred policy can be selected. In addition, in the paper [Jin 2022] pointed out by Reviewer efCX, the authors introduced a pseudo-linearity structure, which is weaker than and gets implied by our result in Theorem 1. This structure plays an important role in their algorithm design process, which strengthens our belief that the results on connected superlevel sets can inspire and guide the design of future algorithms that we may not foresee at this point.
In the broad minimax optimization and game theory literature, knowledge on the existence of Nash equilibrium (NE) guides the design of algorithms and helps researchers understand the limit of any algorithms that can be designed. In nonconvex-nonconcave minimax optimization problems, global NE may not always exists, and weaker notions of optimality have been introduced including local NE [Daskalakis 2018, Mazumdar 2018], coarse correlated equilibria [Muller 2022, Mao 2023], and local/global min-max equilibria [Jin 2020, Vamvoudakis 2023]. Algorithms that search for these alternative solutions are designed by exploiting their specific structure, which may not be optimal in the NE sense even if the existence of NE is established later on.
As we discussed in the related works section, our analysis and network architecture in the deep RL setting are inspired by [Nguyen 2019], which studies the optimization landscape of a supervised learning problem with a convex objective function. Assumptions on the piecewise linearity and monotonicity of the activation functions are required in [Nguyen 2019]. As our objective is a non-convex value function and the last layer of our neural network has to use a nonlinear, non-monotone softmax activation function to produce a valid probability distribution, important innovations need to be made to handle the activation function and the interfacing between the neural network and the policy optimization objective. The analysis of the first and last layer of our neural networks especially reflect the innovation.
References
Daskalakis, Constantinos, and Ioannis Panageas. "The limit points of (optimistic) gradient descent in min-max optimization." Advances in neural information processing systems (2018).
Mazumdar, Eric, Lillian J. Ratliff, and S. Sastry. "On the convergence of gradient-based learning in continuous games." arXiv preprint arXiv:1804.05464 (2018).
Nguyen, Quynh. "On connected sublevel sets in deep learning." In International conference on machine learning, pp. 4790-4799. PMLR, 2019.
Jin, Chi, Praneeth Netrapalli, and Michael Jordan. "What is local optimality in nonconvex-nonconcave minimax optimization?." In International conference on machine learning, pp. 4880-4889. PMLR, 2020.
Muller, Paul, Romuald Elie, Mark Rowland, Mathieu Lauriere, Julien Perolat, Sarah Perrin, Matthieu Geist, Georgios Piliouras, Olivier Pietquin, and Karl Tuyls. "Learning Correlated Equilibria in Mean-Field Games." arXiv preprint arXiv:2208.10138 (2022).
Jin, Yujia, Vidya Muthukumar, and Aaron Sidford. "The complexity of infinite-horizon general-sum stochastic games." arXiv preprint arXiv:2204.04186 (2022).
Mao, Weichao, and Tamer Başar. "Provably efficient reinforcement learning in decentralized general-sum markov games." Dynamic Games and Applications 13, no. 1 (2023): 165-186.
Vamvoudakis, Kyriakos G., Filippos Fotiadis, João P. Hespanha, Raphael Chinchilla, Guosong Yang, Mushuang Liu, Jeff S. Shamma, and Lacra Pavel. "Game theory for autonomy: From min-max optimization to equilibrium and bounded rationality learning." In 2023 American Control Conference (ACC), pp. 4363-4380. IEEE, 2023. | Summary: This work studies the connectedness in (deep) reinforcement learning. First, the authors show that the superlevel set of average reward objective in reinforcement learning is connected under both tabular and over-parameterized policies. The objective is shown to satisfy a stronger equiconnectedness property. Second, the authors use the results to get minimax theorems for robust reinforcement learning. In particular, they show that show that minimax problems with convex functions on one side and equiconnected functions on the other side observes the minimax equality (i.e. has a Nash equilibrium).
Strengths: That the superlevel set of the average-reward objective in reinforcement learning is connected seems a novel and interesting result.
The results hold for over-parameterized neural networks and find applications in minimax problems.
Weaknesses: Some simulations could be used to verify the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors showed that gradient dominance and connectedness do not imply one another. This raises the question of whether connectedness properties of objective functions allow any algorithms to find optimal solutions?
2. Related question: It seems that in Figure 1, the left example is easy for optimization (separated global maximizers), while the right example seems harder (stationary points that are not global maximizers). Does this imply that, compared to gradient dominance, connectedness is a less favorable property for optimization?
3. There are existing results showing that in game settings, a property called pseudo-linearity is satisfied, i.e., there are monotonically increasing path between two policies, see Theorem 1 of https://arxiv.org/abs/2204.04186. I am curious if there is any relation between the connectedness in this work and pseudo-linearity in the above paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a mostly theoretical work. There is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for bringing to our attention this highly relevant pseudo-linearity structure. From the way we construct the path map in the proof of Theorem 1, it is not difficult to see that our result implies the pseudo-linearity, but pseudo-linearity does not imply connected superlevel sets as it lacks the sense of connectedness. The pseudo-linearity structure plays an important role in the algorithm design process in the paper [Jin 2022], which strengthens our belief that the results on connected superlevel sets can inspire and guide the design of future algorithms that we may not foresee at this point.
Besides its potential application to algorithm design, our result in Section 4 discusses something more fundamental. The connectedness of superlevel sets allows us to derive the existence of a globally optimal solution (global Nash equilibrium), which in general may not exist for a nonconvex-concave minimax optimization problem. Knowing that the solution exists is a prerequisite before any algorithms can be designed to find the solution. When the existence of Nash equilibrium is unclear, we usually need to compromise by considering weaker notions of optimality. Please see our response to Reviewer ceyJ for a short list of alternative local/global optimality notions that have been proposed in nonconvex-concave and nonconvex-nonconcave minimax optimization.
References
Jin, Yujia, Vidya Muthukumar, and Aaron Sidford. "The complexity of infinite-horizon general-sum stochastic games." arXiv preprint arXiv:2204.04186 (2022). | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Anonymous Learning via Look-Alike Clustering: A Precise Analysis of Model Generalization | Accept (poster) | Summary: This work considers linear regression on data $(x_i, y_i)_{i=1}^n$ where each $x_i \in \mathbb{R}^d$ is sampled from one of $k$ Gaussian clusters, each with probability $\pi_k$, and $y_i = x_i^T \theta_0 + \varepsilon_i$ with $\varepsilon_i \sim N(0, \sigma^2)$ representing noise. The goal of this paper is to characterize and compare the out-of-sample expected squared error of the minimum 2-norm regression predictor with that of the so-called "look-alike" predictor. The look-alike predictor is the minimum 2-norm predictor after one replaces the first $p$ features of each point with the first $p$ features of its respective cluster average: to motivate this, consider the first $p$ features as sensitive in some manner; aggregation therefore provides a degree of anonymization for each data point. The min-norm and look-alike predictors are then analyzed in the asymptotic regime $d, p, n \rightarrow \infty$ with $d/n \rightarrow \varphi_d$ and $p/n \rightarrow \varphi_p$. The min-norm and look-alike estimators are said to be in the underparameterized regime if $\varphi_d \leq 1$ and $\varphi_d - \varphi_p \leq 1$ respectively, and are otherwise considered overparameterized. In addition, if $\theta_s$ denotes the first $p$ (sensitive) entries of $\theta_0$, then the signal-to-noise ratio (SNR) is defined as $\|\theta_s\| / \sigma$. This work identifies a number of scenarios in which the look-alike predictor outperforms the min-norm predictor; in particular, when the SNR is low, the underparameterized look-alike predictor outperforms the min-norm predictor.
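To make this setup concrete, here is a small self-contained NumPy simulation of the two estimators in a low-SNR, underparameterized configuration. The dimensions, seed, SNR level, and the choice to evaluate both predictors on raw (non-anonymized) test features are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p, k, sigma = 200, 100, 50, 4, 2.0   # p sensitive features; large sigma => low SNR

# Ground truth: tiny signal on the p sensitive coordinates (low SNR on theta_s).
theta = np.concatenate([0.05 * rng.standard_normal(p), rng.standard_normal(d - p)])

# Gaussian mixture features: each row is drawn around one of k cluster centers.
centers = rng.standard_normal((k, d))
labels = rng.integers(0, k, size=n)
X = centers[labels] + rng.standard_normal((n, d))
y = X @ theta + sigma * rng.standard_normal(n)

# Vanilla minimum 2-norm least-squares fit.
theta_mn = np.linalg.pinv(X) @ y

# Look-alike fit: replace the first p (sensitive) columns by their cluster means.
X_la = X.copy()
for c in range(k):
    mask = labels == c
    X_la[mask, :p] = X[mask, :p].mean(axis=0)
theta_la = np.linalg.pinv(X_la) @ y

# Out-of-sample squared error on fresh draws from the same mixture.
labels_t = rng.integers(0, k, size=5000)
X_t = centers[labels_t] + rng.standard_normal((5000, d))
y_t = X_t @ theta + sigma * rng.standard_normal(5000)
err_mn = np.mean((y_t - X_t @ theta_mn) ** 2)
err_la = np.mean((y_t - X_t @ theta_la) ** 2)
```

In this configuration the look-alike fit typically generalizes better: aggregating the weak-signal sensitive columns acts as a regularizer that trades a small bias for a large variance reduction, which is the mechanism behind the low-SNR result the summary describes.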
Strengths: This paper illustrates, albeit in a specific setting, that anonymization by aggregating sensitive features does not necessarily lead to a drop in the performance of the predictor, and can in certain settings actually lead to improvements. The results appear technically sound and are based on a nice use of the Convex-Gaussian Minimax Theorem. I did not go through all of the appendix or re-derive any of the results line-by-line. Synthetic data is used to support and validate the results numerically. The paper is also well written and structured. I am not an expert in this space so cannot comment really on the novelty of the work.
Weaknesses: The biggest weakness to my mind is perhaps the specificity of the data model. Little discussion is provided as to the feasibility of extending the takeaways to more general settings. As a small note, I also think greater care is perhaps required when discussing the informal notion of user privacy used here versus more formal notions, e.g., differential privacy, which actually provide privacy guarantees.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you comment as to the potential avenues and barriers to relaxing your results from asymptotic to big-O?
2. Can you comment as to the potential for your results to be extended to more general data distributions?
3. In Figure 3a) when $\varphi_p=0.9$ then $p=d=0.9n$ and all features are sensitive. If in addition the SNR is $0$, then unless I am mistaken this implies $\theta_0 = 0 $ and therefore $y_i = \varepsilon_i$, i.e., the target is independent of the input features and is random noise. It seems curious then that one classifier has a better generalization than another in this setting, could you comment?
4. Have you observed the conclusions of your results to actually hold on any real-world data? In particular, have you identified any real-world data problems where you actually see the look-alike estimator outperform the min-norm estimator?
5. I wonder if the proportional regime is the most appropriate to consider here for actual applications versus, say, fixed $d$ and $p$, for instance when considering financial or health records in a country. Could you comment on how your results might change in this setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: A section discussing the limitations of the work is lacking. I cannot foresee any negative societal outcomes from this work. Although only fairly basic numerics are presented, a link to the code used to generate the plots is lacking and should probably be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper!
## General comments on weaknesses:
1) **Limitations of the model**: We expect the virtue of look-alike modeling on generalization to apply in a broad range of settings. The high-level intuition is that look-alike modeling acts as a regularizer and can improve generalization by reducing variance, an observation that is not limited to linear regression. To support this empirically, in our **global response** we provide additional experiments for a nonlinear data model and empirically observe a similar phenomenon (i.e., look-alike modeling achieves better generalization over the vanilla estimator at lower SNR). In terms of theory, a similar approach based on CGMT can be used to handle other loss functions, albeit leading to a much more complicated formula for the generalization, which is less transparent in showing the message. A limitation of our analysis, though, is the assumption on the distribution of features, as the CGMT strongly relies on Gaussianity of the features.
2) **Privacy measures**: We will add a discussion of other privacy measures, including differential privacy, aggregate learning, and k-anonymity. Note that in our setting, if the minimum cluster size is k, then after look-alike modeling we have k-anonymity in the sense that for any individual there are at least k-1 other users with the same sensitive features.
## Response to questions:
1) The proportional regime (where $n, d, p$ are of the same order) studied in the paper is indeed much more challenging than the population regime where the sample size $n \gg d, p$. In the population regime, the variance of the estimator goes to zero and we are essentially left with a deterministic estimator. However, as we mentioned in the paper, the population regime is not relevant to practical situations, as many ML systems (in particular neural nets) have a huge number of parameters, on the order of the number of training samples.
2) Please see our response above (#1 in weakness)
3) Yes, the target in this case is just random noise, but note that the two estimators are using different features to fit to this noise vector (one uses individual features and one uses cluster centers), so you will naturally have different estimators and so different risks.
4) We have indeed observed this phenomenon on a click-through-rate dataset of users on a pool of ads. However, due to user privacy, we are not permitted to share that data.
5) Although our theory is for an asymptotic regime, as we show in the simulations, when $n,d,p$ are only a few hundred we already see a great match between theory and simulations.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions, on balance I will keep my current score. | Summary: In this paper the authors propose a look-alike clustering technique that replaces the sensitive features of each individual with the average value of the cluster to which that individual belongs. The authors provide a precise analysis of how replacing sensitive features with cluster-center values affects the generalization of the linear regression model by comparing the under-parameterized and over-parameterized regimes in the asymptotic setting. Numerical experiments are provided that match the asymptotic theory.
Strengths: 1. Precise analysis of how replacing sensitive features with cluster-center values affects the generalization of the linear regression model.
2. Expressions for the risk in the under-parameterized and over-parameterized regimes, and analysis of scenarios where the proposed method improves generalization by achieving lower risk while ensuring privacy.
Weaknesses: 1. While the introduction suggests that the developed theory clearly demonstrates the role of cluster size and number of clusters in controlling generalization error, none of the theory statements presented in the main paper explicitly show that dependence. Looking at the supplementary material, it is clear that such an effect can be seen after simple algebraic manipulation (e.g., in part (b) of Theorem 3.3); it may be a better idea to reorganize the theory statements so that this dependence is clearly visible.
2. In Theorem 3.1, what is $\rho$? While it is mentioned elsewhere that it captures the alignment of the model with the left singular vectors of the cluster centers, can it be interpreted as $U^T_s\theta_{0,s}=\sqrt{\rho}\theta_{0,s}$, which yields $|| U^T_s\theta_{0,s}||=\sqrt{\rho}r_s$?
3. For Case 1 in Section 5, $\psi_d=0.9$. Looking at Figure 3(a), among the various values of $\psi_p$, $\psi_p=0.9$ tolerates the largest SNR while still ensuring $\Delta>1$ ($\log (\Delta) >0$). However, since $\psi_d=0.9$, $\psi_p=0.9$ means all the features are sensitive, and yet it tolerates the largest SNR and improves generalization. Does that mean that, irrespective of sensitivity, it is always a good strategy to replace all features by their cluster centers to achieve better generalization?
4. If we compare Figure 3(a) and 3(b) and look at what SNR values the red curve hits zero for $\rho=0.3$ and $\psi_p=0.5$, we get very different values. In Fig 3(a), this SNR value is a little less than 2, while in Fig 3(b), it is close to 2.5. Since both axes in Fig 3(a) and 3(b) are identical, why is there this discrepancy?
5. In Figure 2, the six graphs are organized in a 2x3 grid without the clear "left" and "right" panels mentioned in the figure caption. Reorganizing them so that the two panels are clearly distinguished would match the figure caption.
6. A few typos in the statement of Proposition 3:
a. $\tilde{X}^T$ should be $\tilde{X}^T_L$
b. It seems $\delta_n$ and $\delta$ represent same thing (cluster estimation error rate). This needs to be fixed.
7. In line 200, “underparameterized” should be “overparameterized”.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. See weakness.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are not clearly mentioned. To establish concrete expression of the risk, Assumption 2 is very simplistic or the isotropic Gaussians in the GMM is very simplistic. It may be a good idea to add a specific section/subsection mentioning the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper!
## General comments on weaknesses:
1) We respectfully disagree with this comment. Our theory indeed captures the effect of cluster size and the number of clusters, as well as other factors, precisely in the studied asymptotic regime. However, an intriguing corollary of our theory is that in the so-called underparameterized regime these two factors (number and size of clusters) do not impact the generalization (cf. Theorem 3.1), while in the overparameterized regime they do (cf. Theorems 3.2 and 3.3). Perhaps this is the source of your confusion. We have highlighted this point in lines 106-111 and 191-192.
2) The definition of $\rho$ is stated in the theorem itself (see line 185). Please recheck.
3) Not necessarily. Recall that $r_s := ||\theta_{0,s}||$ is the norm of the model components restricted to sensitive features. So if you continue converting nonsensitive features into sensitive ones, $r_s$, and hence the SNR, goes up. In other words, when it comes to comparing the gain, you will be comparing different curves at different SNRs.
4) It seems that you are comparing the wrong curves. Note that $\rho = 0.3$, $\psi_p = 0.5$ corresponds to the **green** curve in Fig 3(a) and to the **red** curve in Fig 3(b).
5) Thanks for raising this. The arrangement was a result of the space constraint; we will fix the arrangement of the panels.
6,7) Thanks! We will fix these typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying some of my questions and confusions. Reading the reviews posted by other reviewers and the related discussion, it seems that I misjudged the merit of this paper. Therefore, I raise my score to 6. | Summary: This work presents a generalization analysis for look-alike clustering. In this type of clustering, the features in a model are divided into two groups: sensitive and non-sensitive, and the values of the sensitive features are replaced by the mean of the cluster to ensure K-anonymity (if the size of the cluster is K). They provide an analysis of generalization in the under-parameterized and over-parametrized regimes.
Strengths: The paper brings in generalization analysis for look-alike clustering, i.e., clustering under the assumptions of some features being held fixed at their mean. They provide some simulations as well, showing the under-parametrized and over-parametrized regimes.
Weaknesses: It would be good to provide more intuitive insights on how different factors affect the generalization, e.g., the number of sensitive and nonsensitive features. My intuition is that as the number of sensitive features increases, the generalization should get worse until it fails as they approach the total number of features?
I do not understand this assumption: "We focus on an asymptotic regime where the size of the training set grows in proportion to the features dimension." For a complete picture, it is best to keep them as separate variables and, if required, consider a substitution at the end in a corollary. This assumption takes away important insights related to the relationship between the number of features and the size of the training set.
Experiments are mostly on synthetic data but it may be okay since the focus is mostly theoretical.
Clustering assumptions are based on linear models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Clarify what happens if the number of features is not equal to the number of samples?
How does the number of sensitive features affect generalization?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper!
## General comments on weaknesses:
1) The effect of the number of sensitive features on generalization is indeed more complicated. For example, in the underparameterized regime, as shown in Theorem 3.1, if we fix $n,d$ and increase $p$ (so $\psi_d = d/n$ is fixed and $\psi_p = p/n$ increases), the denominator in the risk increases, and so does $r_s$ in the numerator (recall that $r_s = ||\theta_{0,s}||$ represents the norm of the model component restricted to sensitive features). In addition, $\rho$ can also change, as it measures the correlation between $\theta_{0,s}$ (the model component over sensitive features) and the cluster centers. The overall effect on the risk depends on the interplay between these factors. The virtue of our precise theory is to shed light on such intriguing effects. The reason your intuition does not give the complete picture is, at its core, a bias-variance tradeoff. As we argue after Theorem 5.1, look-alike modeling acts as a regularizer. As we increase the number of sensitive features, the impact of this regularization becomes stronger. It induces bias but also decreases the variance of the estimators. Depending on the interplay between them, we can have a positive or negative impact on the generalization.
2) There is a misunderstanding about the proportional regime. It says that the sample size and the feature dimension are "proportional", i.e., they are of the same "order" as the sample size grows to infinity (mathematically, $d/n \to \psi_d$ and $p/n \to\psi_p$ as $n\to\infty$ for arbitrary bounded constants $\psi_d$ and $\psi_p$). Therefore, $n$, $d$, $p$ are separate variables (e.g., $d = 1000n$ or $p = 100n$, and these constants can be arbitrary).
This is in contrast to the "population regime", where the sample size is "orderwise" larger than the feature dimension (i.e., $n/d \to \infty$). The population regime is of course much easier to analyze; however, it does not capture many of the interesting phenomena happening in the proportional regime. Moreover, as discussed in the paper, with the rise of overparameterized models, the proportional regime is more relevant in practice.
3) Since our goal has been to derive a precise characterization of the generalization performance of look-alike modeling, we devoted our numerical experiments to synthetic data to corroborate our theory and show the close match between simulations and the proposed theory.
4) Regarding the linear model and the limitations, please see our **global response** to all reviewers, where we supplement an experiment with a nonlinear data model and empirically show that a similar phenomenon (better generalization of look-alike modeling at low SNR) applies to this setting as well.
## Response to questions:
1) Please see our response above. We do not make such an assumption. The proportional regime assumes that the number of features and the number of samples are at the same scale (their ratio can be any arbitrary bounded constant).
2) Please see our response above (item #1 in weaknesses).
---
Rebuttal Comment 1.1:
Comment: Increasing my score by 1 (5 to 6)
My understanding of the assumption was correct, but what I was asking was why this is a good assumption.
In the final version, please include a remark discussing this assumption. It would be good to also find other theory papers where this proportional regime has been considered.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments and also for raising your score. We will include a remark in the final version to further explain this asymptotic regime and its relevance, as well as several additional works which have considered this proportional regime.
Strengths: 1. The paper studies an interesting area of anonymizing private/sensitive user information, drawing on recent research that examines the memorization of training data samples.
2. The theoretical analysis presented in the paper is intriguing as it explores scenarios where learning from look-alike clustering-based anonymized data can potentially improve model generalization.
3. The paper is well-written with the majority of sections being clear and coherent.
Weaknesses: 1. The current draft lacks coverage of related works on other data anonymization methods. It would be helpful for the readers if the authors included a literature survey in either the main paper or the appendix.
2. The paper lacks comprehensive empirical experiments on real-world datasets (of any scale).
3. Limitations of the theoretical analysis presented in the paper are also missing.
4. Perturbation analysis about the upper bound of the look-alike estimator's risk (Proposition 3.4) is not covered in the over-parametrized regime.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please refer to the comments in the weaknesses section.
2. Besides the generalization bounds, do the authors have any insights about the DP guarantees of their proposed look-alike anonymization approach?
Minor recommendations:
1. The authors should specify that proposition 3.4 holds true for the look-alike estimator in the under-parametrized regime.
Reference for k-anonymity :
1. Sweeney, Latanya. "k-anonymity: A model for protecting privacy." International journal of uncertainty, fuzziness and knowledge-based systems 10.05 (2002): 557-570.
Related works covering memorization of sensitive information in neural networks and can provide valuable insights into the problem motivation:
1. Song, Congzheng, and Vitaly Shmatikov. "Overlearning reveals sensitive attributes." arXiv preprint arXiv:1905.11742 (2019).
2. Feldman, Vitaly, and Chiyuan Zhang. "What neural networks memorize and why: Discovering the long tail via influence estimation." Advances in Neural Information Processing Systems 33 (2020): 2881-2891.
3. Stephenson, Cory, et al. "On the geometry of generalization and memorization in deep neural networks." arXiv preprint arXiv:2105.14602 (2021).
4. Malekzadeh, Mohammad, Anastasia Borovykh, and Deniz Gündüz. "Honest-but-curious nets: Sensitive attributes of private inputs can be secretly coded into the classifiers' outputs." Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I could not find a section addressing the limitations and negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper!
## General comments on weaknesses:
1) We will add related work on other data anonymization methods, including differential privacy, k-anonymity and aggregate learning.
2) Since our focus has been on developing a "precise characterization" of the effect of look-alike modeling on generalization, we devoted our numerical sections to thorough synthetic simulations to corroborate our theory in various data regimes. We would be happy to add a real-data experiment to show the high-level idea that look-alike modeling can improve generalization.
3) A limitation of the presented analysis is that it is focused on linear regression. Despite its simplicity, though, this setting is rich enough to exhibit the surprising phenomenon that look-alike clustering can reduce overfitting. Also, deriving a precise characterization of generalization in the regime where the sample size and feature dimension grow in proportion is already challenging. That said, we expect the same phenomenon to carry over to broader problem settings.
4) The upper bound on the perturbation becomes vacuous unless $\psi_d-\psi_p \le 0.5$. Recall that $\psi_d-\psi_p = (d-p)/n$. We can still have $\psi_p = p/n$ and $\psi_d = d/n$ both be higher than one, but this requires the number of non-sensitive features $(d-p)$ to be smaller than $n/2$. The assumption arises in the analysis because we need to compare the spectrum of the look-alike features with the spectrum of the raw individual features, and the randomness of the non-sensitive features can lead to large deviations in the eigenvalues. However, please note that this proposition only makes an assumption on the perturbation norm, and NOT on any specific estimation of the clusters (so perturbations can be added in an adversarial way as long as they satisfy the norm constraint).
## Response to questions:
1) Please see our responses to weaknesses.
2) It is a very interesting question. In its current form, look-alike modeling is not DP unless some sort of randomization (noise) is applied to the look-alike features. In ongoing work we are investigating the right way of doing this. The idea is to split the privacy budget, using part of it to learn the clusters privately and part of it to randomize the look-alike features. The intuition is that if the cluster sizes are large enough, then this approach leads to a better privacy/generalization tradeoff compared to making the individual features DP from the outset. But this is ongoing work and out of the scope of the current one.
Minor comments: Many thanks for the pointers. We will add/discuss them in the revised version. | Rebuttal 1:
Rebuttal: We would like to sincerely thank all the reviewers for taking the time to review our paper and for the valuable feedback. A common comment raised by some of the reviewers concerned the limitations of the work and whether the message of our paper goes beyond linear regression.
We expect the virtue of look-alike modeling on generalization to apply in a broad range of settings. The high-level intuition is that look-alike modeling acts as a regularizer and can improve generalization by reducing variance, an observation that is not limited to linear regression. To support this empirically, here we provide additional experiments for a nonlinear data model. We consider the setting where the response is generated as $y = \exp(X\theta_0 + \varepsilon)$, with $\varepsilon$ Gaussian noise, and the estimators are obtained by fitting a generalized linear model with a logarithmic link function and Poisson distribution. As the plot in the attached pdf shows, at smaller SNR we observe a gain in the generalization of the look-alike estimator over the vanilla estimator that uses individual sensitive features (similar behavior to Fig 3a in the paper).
In terms of theory, a similar approach based on CGMT can be used to handle other loss functions, albeit leading to a much more complicated formula for the generalization, which is less transparent in showing the message. Nonetheless, a limitation of our analysis is the distributional assumption on the features, as the CGMT strongly relies on the Gaussianity of the features.
Pdf: /pdf/a1bded7423b09034eb3bef1e7b403f7ffe7bcd37.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies linear regression where some coordinates of the covariate x are not revealed directly to the learner, and only a cluster-wise average is revealed for those coordinates. This change in the covariates x is called look-alike clustering and it has been used for protecting sensitive attributes in prior work. While it seems that with look-alike clustering, less information is provided to the learner, the paper shows that in certain regimes the performance of linear regression improves after look-alike clustering. As the authors explain, the reason behind this surprising phenomenon is that look-alike clustering can reduce overfitting.
More concretely, this paper assumes that the data points (x_i, y_i) are generated iid from a linear model y_i = < x_i, theta > + eps_i with mean-zero Gaussian noise eps_i. The distribution of the covariate x is a mixture of k Gaussian distributions each with its covariance matrix being the identity matrix. Look-alike clustering then corresponds to replacing a subset of the coordinates of each x_i by the coordinates of the mean of the Gaussian distribution that generates x_i. Those coordinates are called sensitive features.
For d dimensional x with p sensitive features, the authors study the asymptotic performance of linear regression with n data points as d, p, n all tend to infinity and the ratios d/n, p/n tend to constants. The number of clusters, k, is fixed, and performance is measured using the (population) mean squared error. The learner estimates the parameter theta in the linear model by solving least squares on the data points. When there are multiple solutions (e.g. when n < d) the learner chooses the one with the minimum l_2 norm $\|\theta\|_2$.
The authors show closed-form formulas for the asymptotic performance with and without look-alike clustering. This allows the authors to identify regimes when look-alike clustering improves the performance. They also included experimental results verifying the formulas. While the formulas are only shown to hold asymptotically, even when d, p, n are only a few hundreds the experimental results already closely match the formulas. The authors also extend the formulas to the case where we do not know the means of the k Gaussians when performing look-alike clustering and we instead use estimates for these means.
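The setup described in this summary can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the dimensions, seed, noise level, and cluster-mean scale are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p, k = 400, 200, 100, 5            # samples, features, sensitive features, clusters

# Gaussian mixture covariates: identity covariance around one of k cluster means
means = rng.normal(size=(k, d))
labels = rng.integers(k, size=n)
X = means[labels] + rng.normal(size=(n, d))

# Linear model y_i = <x_i, theta> + eps_i with mean-zero Gaussian noise
theta0 = rng.normal(size=d) / np.sqrt(d)
y = X @ theta0 + 0.5 * rng.normal(size=n)

# Look-alike clustering: replace the first p (sensitive) coordinates of each
# x_i by the corresponding coordinates of its cluster mean
X_la = X.copy()
X_la[:, :p] = means[labels][:, :p]

# Least squares; pinv returns the minimum-l2-norm solution when n < d
theta_vanilla = np.linalg.pinv(X) @ y
theta_la = np.linalg.pinv(X_la) @ y
```

After the replacement, the sensitive block of the design matrix takes at most k distinct row values, which is what makes the look-alike estimator behave differently from the vanilla one.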
Strengths: The authors chose an interesting problem to study: the effect of look-alike clustering on generalization. The exact formulas on the performance of linear regression established in this paper are technically challenging to obtain, for which the authors use tools such as the Convex Gaussian Minimax Theorem. The paper is well written. The proofs in the supplementary are easy to follow and they look correct and complete.
Weaknesses: 1. Unless I am missing some parts of the paper, the authors do not provide any (empirical) examples beyond linear regression where look-alike clustering is helpful for performance. It would be great to have such examples as they may suggest that the analysis in this paper could explain a general phenomenon appearing not just in linear regression. Currently, the impact of the paper is somewhat limited to the linear regression setting.
2. In Proposition 3.4, where look-alike clustering uses estimated means of the Gaussians, a requirement seems to be that we can accurately estimate the Gaussian mean of every data point. When we do not know which cluster each point comes from, this seems quite challenging, especially when the Gaussians are not well separated. It is important for the authors to provide a clear explanation of how to obtain such accurate estimates (perhaps using prior results on learning Gaussian mixtures).
3. Starting at Line 297, the authors provide an explanation for the improved generalization from look-alike clustering. Indeed, with the main results being presented using math formulas, a high-level explanation is much needed. However, I am not able to fully understand the authors' explanations at Line 297. The authors wrote that look-alike clustering drops the <z_s, theta_s> term, but my understanding is that the y in look-alike clustering is generated using the original x and thus still contains the <z_s, theta_s> term, which means that the authors' description is inaccurate. It is also unclear how the authors' explanation is related to regularization.
Here is my own understanding of why look-alike clustering can improve generalization. The authors can comment on whether this makes sense. After look-alike clustering, each x_i, when restricted to the sensitive features, can only be chosen from k possible vectors, and thus the learned parameter theta, when restricted to the sensitive features, must be in the space spanned by the k vectors (since the learner picks the minimum norm theta). This low-dimension restriction is a form of regularization and can reduce overfitting. This does not explain why lower SNR leads to larger performance gain, which I feel is not explained well in the authors' explanation either.
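The reviewer's span-restriction intuition can be checked numerically: the min-norm solution lies in the row space of the look-alike design, and the sensitive block of that design has at most k distinct rows, so the sensitive part of the learned parameter must lie in the k-dimensional span of the cluster means restricted to those coordinates. A hypothetical sketch (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, p, k = 100, 200, 120, 4            # overparameterized: d > n

means = rng.normal(size=(k, d))
labels = rng.integers(k, size=n)
X_la = means[labels] + rng.normal(size=(n, d))
X_la[:, :p] = means[labels][:, :p]       # look-alike: sensitive block collapses to k rows

theta0 = rng.normal(size=d) / np.sqrt(d)
y = X_la @ theta0 + rng.normal(size=n)   # any response works for this span check
theta_la = np.linalg.pinv(X_la) @ y      # min-l2-norm solution lies in the row space
theta_s = theta_la[:p]                   # component on the sensitive features

# theta_s should lie in the span of the k cluster means restricted to the
# sensitive coordinates, a k-dimensional subspace of R^p
B = means[:, :p]                         # (k, p): the only rows the sensitive block can take
coef, *_ = np.linalg.lstsq(B.T, theta_s, rcond=None)
residual = np.linalg.norm(B.T @ coef - theta_s)
print(residual)                          # numerically zero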
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The explanation at Line 297 seems to be presented only for Case 2, but I don't see why this should be the case.
2. It seems that the assumed upper bound on delta at Line 237 could be negative, in which case the assumption cannot be satisfied. I would appreciate some comments on this.
3. It would be great if the authors could provide their feedback on the weaknesses section of the review.
Typos:
- Line 29: leaner -> learner
- Line 30: that that -> that
- Line 133: belong -> belongs
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately stated all the assumptions needed for obtaining their results. In terms of negative social impact, perhaps one question is whether look-alike clustering may lead to stereotyping each cluster (e.g. when the clusters are demographic groups). It would be great if the authors could discuss this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper!
## General comments on weaknesses:
1) We expect the virtue of look-alike modeling on generalization to apply in a broad range of settings. The high-level intuition is that look-alike modeling acts as a regularizer and can improve generalization by reducing variance, an observation that is not limited to linear regression. To support this empirically, in our **global response** we provide additional experiments for a nonlinear model, where we observe a similar phenomenon.
2) We would like to emphasize that Proposition 3.4 **does not make such an assumption**. Note that the proposition statement involves two matrices $M$ and $\widetilde{M}$. The matrix $M$ has the true cluster means as its columns, and $\widetilde{M}$ has the estimated cluster means. Similarly, $\Lambda$ is the matrix which encodes the true cluster memberships, while $\widetilde{\Lambda}$ encodes the estimated ones. Our proposition allows both estimates $\widetilde{M}$ and $\widetilde{\Lambda}$ to differ from the true ones.
3) Let us make a clarification here. There are two components of the problem setting: (i) the data generative model (linear regression with a Gaussian mixture model on the features), and (ii) the trained model (look-alike). The latter is how the learner fits a model to predict $y$ from the features. The response $y$ is of course based on individual features, as you said, but the look-alike approach fits a model by regressing the response $y$ against $(\mu_s, x_{{\rm ns}})$ as opposed to $(x_s, x_{{\rm ns}})$ (which is done in vanilla regression). This is what we mean by "look-alike clustering drops the $\langle z_s, \theta_s\rangle$ term": it drops this term from the model it uses to regress against the response $y$. At low SNR, this corresponds to removing a 'noise' component from the model ($\langle z_s,\theta_s\rangle$ is of the order of the noise $\varepsilon$) and avoids overfitting to this component. We will rewrite this part in the revision to ensure clarity and also include your explanation.
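The bias-variance story in this exchange can be probed with a small simulation. The sketch below is illustrative only (not the authors' experiment); the dimensions and seed are arbitrary, and the sensitive part of $\theta_0$ is set to zero to emulate the low-SNR regime, where the vanilla fit can only overfit the $\langle z_s,\theta_s\rangle$-like noise through the sensitive features:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p, k = 100, 300, 280, 4            # overparameterized: d > n

means = rng.normal(size=(k, d))
labels = rng.integers(k, size=n)
X = means[labels] + rng.normal(size=(n, d))

# Low SNR on sensitive features: theta_0 carries no signal on the first p coords
theta0 = np.concatenate([np.zeros(p), rng.normal(size=d - p) / np.sqrt(d - p)])
y = X @ theta0 + rng.normal(size=n)

X_la = X.copy()
X_la[:, :p] = means[labels][:, :p]       # look-alike training features

theta_v = np.linalg.pinv(X) @ y          # vanilla min-norm estimator
theta_l = np.linalg.pinv(X_la) @ y       # look-alike estimator

# Fresh test data; the look-alike model also predicts from cluster means
m = 2000
lab_t = rng.integers(k, size=m)
X_t = means[lab_t] + rng.normal(size=(m, d))
y_t = X_t @ theta0 + rng.normal(size=m)
X_t_la = X_t.copy()
X_t_la[:, :p] = means[lab_t][:, :p]

risk_v = np.mean((y_t - X_t @ theta_v) ** 2)
risk_l = np.mean((y_t - X_t_la @ theta_l) ** 2)
print(risk_v, risk_l)                    # at low SNR, risk_l is typically the smaller one
```

Under these arbitrary settings the look-alike fit has far fewer effective degrees of freedom (d-p nonsensitive coordinates plus a rank-k sensitive block), which is the regularization effect described in the rebuttal.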
## Response to questions:
1) The explanation applies to other cases as well (e.g., in Case 1, Fig 3, we also see higher gain at lower SNR). We initially thought to put the explanation right after the theorem statement, but we now understand the confusion it caused and will move it to the beginning of the section.
2) Line 237: The upper bound on the perturbation is non-negative only if $\psi_d-\psi_p \le 0.5$. Recall that $\psi_d - \psi_p = (d-p)/n$ is the ratio of the number of non-sensitive features to the sample size. For $\psi_d-\psi_p\ge 0.5$ the statement is vacuous, as the assumption is not satisfied. However, note that in this regime (or its extreme where $n \ll d-p$) it is hopeless to have a reasonable estimate of the clusters, as the sample size is substantially smaller than the problem dimension.
That said, please note that this proposition only makes an assumption on the perturbation norm, and NOT on any specific estimator of the clusters (so perturbations can be added in an adversarial way, as long as they satisfy the norm constraint). In other words, the result holds against strong adversarial perturbations. If one instead focuses on a specific estimator of the clusters, then the bound on $\delta$ can likely be relaxed.
3) Please see our responses to the weaknesses.
Thanks for pointing out the typos. We will fix them.
---
Rebuttal Comment 1.1:
Title: Follow-up questions
Comment: I thank the authors for their careful response.
I think the added experiment is helpful for motivating the problem and demonstrating its impact, and I wonder if the importance of the work can be further highlighted. A strength of the paper is that it gives exact formulas for the expected squared loss, whereas a weakness is that it does not explain well what insight these quantitative formulas give in addition to a qualitative high-level explanation: 1) look-alike clustering reduces the model capacity--> regularization --> generalization (substantially reduced variance) and 2) low SNR implies small loss caused by the reduced model capacity (only slightly increased bias). The qualitative explanation already seems quite convincing for linear regression. For comparison, the double descent phenomenon and the long-tail theory seem very hard to explain convincingly (even in hindsight) without resorting to quantitative analysis.
The authors' response on the dropped term $\langle z_s,\theta_s \rangle$ makes sense. I think it is important to make the distinction between the training process and the data-generating process (data model (2.1)) clearer in the next version. The term $\langle z_s,\theta_s \rangle$ is currently presented in the description of the data-generating process, but it is only meant to be dropped in the training process (thus restricting the capacity of the learned model, which corresponds to regularization).
Here are some follow-up questions:
**Perturbation assumption in Proposition 3.4.** I understand that the assumption on $\widetilde M_s$ and $\widetilde \Lambda$ is only about their product and some amount of error $\delta_n$ is allowed. My question was whether the authors know of an efficient algorithm that takes the original data set (before look-alike clustering) as input and produces such estimates $\widetilde M_s$ and $\widetilde \Lambda$ that satisfy the assumption.
**Over-parameterized version of Proposition 3.4.** Based on the authors' response, Proposition 3.4 only applies to the significantly under-parameterized regime $\psi_d - \psi_p \le 0.5$. Is there a fundamental barrier to extending it to the over-parameterized regime? In the over-parameterized regime, $\psi_p$ can still be very small, so we do not seem to be restricted by the data set size for the purpose of recovering the cluster means for the sensitive features.
**Explanation of improved generalization.** Does the explanation at Line 297 apply to Case 3? Why or why not?
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments. We will use the added experiment in the revised version to argue that the use of look-alike modeling in generalization goes beyond linear regression. As you said, the qualitative explanation justifies, in hindsight, the insights we obtain from the quantitative analysis. A precise quantitative theory is more helpful in understanding thresholds (like how small the SNR should be to see a positive gain) and the key factors in generalization (e.g., the fact that the correlation between cluster centers and the model $\theta_0$ is a key factor, or how the size/number of clusters affects generalization, is only understood through a precise quantitative theory).
# Response to follow-up questions:
1) There are many efficient algorithms proposed to learn cluster membership under a Gaussian mixture model, including semidefinite programs, Lloyd's algorithm (a greedy iterative method to approximate K-means), spectral clustering, tensor decompositions, and the method of moments, among others. In particular, (Loffler et al.) shows optimality of the spectral method (in terms of sample complexity) for GMMs. We have not worked out their derived sample complexity in terms of our specific error measure $||M\Lambda - \tilde{M}\tilde{\Lambda}||$, as it has not been the focus of the paper, but we expect spectral clustering to satisfy the assumption when the separation of the cluster centers is large enough. (The analysis of (Loffler et al.) allows the sample size and feature dimension to be of the same order and shows that the fraction of misclustered points goes down exponentially fast in the cluster-center separation. We can then focus on the correctly clustered points and use a crude bound for the rest. On the correct ones, $\Lambda$ and $\tilde{\Lambda}$ coincide, and we only need to bound the distance between $M$ and $\tilde{M}$, which is the same as bounding the deviation of the sample average of points from the actual average.)
*Loffler, Zhang, Zhou, Optimality of spectral clustering for Gaussian mixture model, Annals of Statistics, 2021.
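The clustering-then-averaging argument above can be sketched in a few lines of numpy. This is an illustrative toy (the data, dimensions, and initialization are ours, not from the paper or from Loffler et al.): a minimal Lloyd's iteration produces $\tilde{M}$ and $\tilde{\Lambda}$, and the constrained quantity $||M\Lambda - \tilde{M}\tilde{\Lambda}||$ is then just the spectral norm of the columnwise difference of assigned means.

```python
import numpy as np

rng = np.random.default_rng(1)
k, p, n = 3, 5, 300
M = 4.0 * rng.normal(size=(p, k))        # true cluster means as columns
z = rng.integers(k, size=n)              # true memberships (Lambda, as hard labels)
X = M[:, z] + rng.normal(size=(p, n))    # observed sensitive features

def lloyd(X, k, iters=50, seed=0):
    """Greedy K-means (Lloyd's algorithm): estimated means (p x k) and hard labels (n,)."""
    rng = np.random.default_rng(seed)
    centers = X[:, rng.choice(X.shape[1], size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[:, :, None]) ** 2).sum(axis=0)  # (k, n)
        labels = dists.argmin(axis=0)
        for j in range(k):
            if (labels == j).any():
                centers[:, j] = X[:, labels == j].mean(axis=1)
    return centers, labels

M_hat, z_hat = lloyd(X, k)
# The quantity constrained in Proposition 3.4, written columnwise:
err = np.linalg.norm(M[:, z] - M_hat[:, z_hat], ord=2)
```

With well-separated centers, most points are clustered correctly, so `err` is dominated by the deviation of the sample cluster averages from the true means, matching the outline above.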
2) After the rebuttal period, we scrutinized the proof and realized that we can obtain a similar result for the over-parameterized regime, where the assumption in this case reads $\delta_n\le c(\sqrt{\psi_d-\psi_p-1} - 1)$. So the only part that is not covered is when $0.5< \psi_d-\psi_p<2$. The fact that $\psi_d-\psi_p$ should be away from 1 is fundamental, because otherwise the look-alike features will have a very small (non-zero) singular value and the adversary can put all the perturbation in that space. Since the ridgeless estimator depends on the pseudo-inverse of the feature matrix, this leads to a large perturbation of the estimator. But requiring $\psi_d-\psi_p$ to be outside $(0.5, 2)$ is, we believe, an artifact of the analysis and of the result of [28] that we used in our proof.
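The instability mechanism can be illustrated with a small numpy construction of ours (not the paper's): the spectral norm of the pseudo-inverse equals $1/\sigma_{\min}$, so a near-zero singular value amplifies perturbations placed along that direction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50, 40
U, _, Vt = np.linalg.svd(rng.normal(size=(n, d)), full_matrices=False)
s = np.linspace(10.0, 1.0, d)
s[-1] = 1e-3                                  # one near-zero (but non-zero) singular value
X = U @ np.diag(s) @ Vt

# Spectral norm of the pseudo-inverse equals 1/sigma_min = 1000 here.
amplification = np.linalg.norm(np.linalg.pinv(X), ord=2)

# A norm-delta perturbation placed entirely along the weakest singular
# direction shifts that singular value from 1e-3 to 0.101, so the
# corresponding component of the pseudo-inverse swings from 1000 to ~10.
delta = 0.1
E = delta * np.outer(U[:, -1], Vt[-1])
new_amplification = np.linalg.norm(np.linalg.pinv(X + E), ord=2)
```

Since the ridgeless estimator is `pinv(X) @ y`, a small, adversarially placed data perturbation of norm `delta` can thus move the estimator by an amount of order $1/\sigma_{\min}$, which is exactly why $\psi_d-\psi_p$ must stay away from 1.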
3) Yes, it applies to Case 3 as well. Note that the explanation (based on the regularizing effect of look-alike modeling and the avoidance of overfitting at low SNR) does not assume anything about the under-/over-parameterized regime. Indeed, we had a figure similar to Fig 3(a) for Case 3, which we removed due to space constraints. (Similar to Fig 3(a), it shows a positive gain at low SNR.)
Towards Semi-Structured Automatic ICD Coding via Tree-based Contrastive Learning | Accept (poster) | Summary: This paper addresses automating diagnosis coding via tree-based contrastive learning. It uses an established benchmarking dataset for this task, and achieves insightful results from its comparative performance evaluations and ablation studies.
Strengths: This paper addresses automating diagnosis coding via tree-based contrastive learning. It uses an established benchmarking dataset for this task, and achieves insightful results from its comparative performance evaluations and ablation studies. It has a well chosen range of machine learning methods to compare in the coding task, and its methods are very clearly and thoroughly described.
Weaknesses: The paper is somewhat lacking in its qualitative analysis. It would be helpful to extend Section 5 with a mixed method approach to analyse and evaluate the experimental methods also using qualitative approaches.
The paper also calls for a discussion section to deepen the lessons-learnt part of the study. For example, what are the limitations of the study? What are the envisioned pros and cons of the studied methods in their broader context in health and medicine? What are the ethical considerations related to using the MIMIC-III dataset?
It would have been helpful to connect this natural language processing paper to prior shared tasks and their shared datasets to allow seeing trends in methods. E.g., the Computational Medicine Center's 2007 Medical NLP Challenge, followed by those by I2B2, N2C2, and CLEF eHealth, would be worth briefly surveying to ensure that methods are compared to more traditional ones as well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See my broader impact and ethical consideration comments above for minor comments
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
We truly appreciate your suggestions. We understand your concerns are from multiple perspectives, and we try our best to answer them in this discussion. We sincerely hope our answers can address your concerns.
---
### Weakness 1: The paper is somewhat lacking in its qualitative analysis. It would be helpful to extend Section 5 with a mixed method approach to analyse and evaluate the experimental methods also using qualitative approaches.
**A:** Thank you for this suggestion. We agree that a qualitative analysis is helpful to intuitively show the effectiveness of the proposed method. In Section 5 (Experiments), we have a case study (Section 5.6) about how the proposed method can better connect labels and clinical text. Also, in Appendix B.3, Extracted section titles, we have a qualitative analysis of the extracted section titles.
We hope these can serve as part of the qualitative analysis. If not, we would appreciate it if the reviewer could provide more details on the specific approaches for qualitative analysis.
---
### Weakness 2 + Limitation: The paper is also calling for a discussion section to deepen the lessons learnt part of the study. For example, what are the limitations of the study? What are the envisioned pros and cons of the studied methods in their broader context in health and medicine? What are the ethical considerations related to using the MIMIC-III dataset? + See my broader impact and ethical consideration comments above for minor comments
**A:** For limitations, currently we discuss them in the last paragraph of Section 6, Conclusion. But as the Reviewer ijoF suggested, we will provide a separate section to discuss more details as follows:
> ### Limitations
>
> Although the proposed training strategies are able to enhance existing ICD coding models, they are dependent on the design of these models. If a model is well-designed and has many parameters, it generally overfits with limited training data; in this case, our proposed training strategies are a good enhancement. Additionally, we only focus on the variability caused by the order of sections in this work, but there are other forms of variability, such as typos and synonyms. In the future, we plan to design new ICD coding models based on sections and consider more types of variability to further improve the robustness of the training process.
For broader impact and ethical considerations, we currently have an independent section in Appendix C, Broader Impacts. In that section, we discuss the ethical considerations of using MIMIC-III. We will consider moving them into the main paper in the future version.
> ### Broader Impacts
> **Ethical considerations** While EHR data contains private information of patients, the MIMIC-III dataset used in this work as well as all backbone models is a publicly available dataset. It de-identified the sensitive information of patients and doctors with masks, including admission/discharge date, name, and hospital name (e.g., [\*\*first name3\*\*]) to protect privacy. Therefore, the data we used will not leak such information even if we publish our code and model parameters.
>
> **Societal Impacts** Incorrect ICD coding can lead to medical billing errors which can affect patients and healthcare costs. However, as an enhancement of existing ICD coding models, our work aims to improve the prediction accuracy of ICD coding. We believe our method does not bring additional negative societal impacts to ICD coding.
---
### Weakness 3: It would have been helpful to connect this natural language processing paper to prior shared tasks and their shared datasets to allow seeing trends in methods. E.g., Computational Medicine Center's 2007 Medical NLP Challenge, followed by those by I2B2, N2C2, and CLEF eHealth would be worth of briefly surveying to assure that methods are compared to more traditional ones as well.
**A:** Thank you for this suggestion. We agree it would be good to incorporate traditional datasets and models. Currently, we follow the experimental protocol of the recently published backbone models used in this paper. Since they mainly use the MIMIC dataset, we think it is fair to compare performance on the MIMIC dataset in this work. But we will definitely explore the I2B2, N2C2, and CLEF eHealth datasets in future work.
---
Rebuttal Comment 1.1:
Title: Rebuttal response
Comment: Based on the clear and convincing response by the authors, I have revised my review scoring.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your recognition and valuable suggestions in this review! | Summary: This paper describes a novel method of ICD coding that explicitly model clinical note sections. Instead of treating a clinical note as a long sequence of tokens, the authors propose to segment a clinical note into sections and then use contrasive learning to pre-train the model.
Experiment results on MIMIC-III show that the proposed components can be used to improve the effectiveness of several existing CNN, RNN, Transformer-based models, especially when the training data is limited.
Strengths: * A simple yet effective contrastive learning variant based on label tree
* The proposed (section segmentation and contrastive learning) components are used with several existing models and are shown to be effective, especially with limited training data.
Weaknesses: * The used baselines are mainly CNN, RNN based. The only used transformer-based baseline PLM-ICD seems not to be a strong one. See suggestion 3
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Suggest swapping equations (1) and (2)
* Section 4.3: can you explain what is the role of perm operation (or saying, why it is necessary)?
* Suggest to consider stronger transformer-based baselines [1]
[1] Xiang Dai, Ilias Chalkidis, Sune Darkner, Desmond Elliott, "Revisiting Transformer-based Models for Long Document Classification", in Findings of EMNLP, 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
We are glad to know you think our work is effective. We truly appreciate your suggestions. We believe your concerns are mainly due to the baseline selection, especially for Transformer-based models. We carefully read your suggested paper and add the result of a stronger Transformer-based model. We sincerely hope it can adequately address your concerns.
---
### Weakness + Question 3: The used baselines are mainly CNN, RNN based. The only used transformer-based baseline PLM-ICD seems not to be a strong one. See suggestion 3 + Suggest to consider stronger transformer-based baselines:
[1] Xiang Dai, Ilias Chalkidis, Sune Darkner, Desmond Elliott, "Revisiting Transformer-based Models for Long Document Classification", in Findings of EMNLP, 2022.
**A:** Thank you for these suggestions. We first reproduce Table 2 of that paper for your reference (C: CNN, T: Transformer, R: RNN, L: Roberta-large). It is worth noting that the RNN-based MSMN model (one of the baselines used in our paper) has the best performance even compared to various Transformer-based models. This indicates that we already use strong baselines.
| Model | Model Type | Macro AUC | Micro AUC | Macro F1 | Micro F1 | P@5 |
|----------------|-------------|-----------|-----------|----------|----------|------|
| CAML | C | 88.4 | 91.6 | 57.6 | 63.3 | 61.8 |
| PubMedBERT | T | 88.6 | 90.8 | 63.3 | 68.1 | 64.4 |
| GatedCNN-NCI | C | 91.5 | 93.8 | 62.9 | 68.6 | 65.3 |
| LAAT | R | 92.5 | 94.6 | 66.6 | 71.5 | 67.5 |
| **MSMN** | R | **92.8** | **94.7** | **68.3** | **72.5** | **68.0** |
| *Baselines processing up to 512 tokens* | | | | | | |
| First | T | 83.0 | 86.0 | 47.0 | 56.1 | 55.4 |
| Random | T | 82.5 | 85.4 | 42.7 | 51.1 | 52.3 |
| Informative | T | 82.7 | 85.8 | 46.4 | 55.2 | 54.8 |
| *Long document models* | | | | | | |
| Longformer (4096 + LWAN) | T | 90.0 | 92.6 | 60.7 | 68.2 | 64.8 |
| Hierarchical (4096 + LWAN) | T | 91.1 | 93.6 | 62.9 | 69.5 | 65.7 |
| Hierarchical (4096 + LWAN + L) | T | 91.7 | 94.1 | 65.2 | 71.0 | 66.2 |
| Hierarchical (4096 + LWAN) | T | 91.4 | 93.7 | 63.8 | 70.1 | 65.9 |
| Hierarchical (4096 + LWAN + L) | T | 91.9 | 94.1 | 65.5 | 71.1 | 66.4 |
We choose PLM-ICD because it is a recently published Transformer-based model (2022), which shows strong performance in the MIMIC-full setting. PLM-ICD splits the clinical notes into chunks. It can also be considered as a type of Hierarchical Transformer. However, since it uses Roberta-base instead of Roberta-large (L), the performance is similar to Hierarchical (4096 + LWAN) in this table.
To make a stronger comparison, besides PLM-ICD, we have added Hierarchical (4096 + LWAN + L) as another backbone model. Here, LWAN refers to “label-wise attention network”. We list the results as follows:
For MIMIC-50:
| Model | w/o CM | | | w/ CM | | | |
|--------------|-------------|-------------|------------|-------------|-------------|------------|-----------|
| | Macro-$F_1$ | Micro-$F_1$ | P@5 | Macro-$F_1$ | Micro-$F_1$ | P@5 | $p$-value |
| PLM-ICD | 64.5 (0.3) | 69.3 (0.2) | 64.5 (0.4) | 65.2 (0.1) | 70.3 (0.2) | 65.6 (0.2) | $2 \times 10^{-4}$ |
| Hierarchical | 65.3 (0.1) | 70.6 (0.3) | 66.5 (0.1) | 66.1 (0.2) | 71.8 (0.4) | 67.2 (0.3) | $4 \times 10^{-4}$ |
For MIMIC-rare-50:
| Model | w/o CM | | w/ CM | | |
|--------------|-------------|-------------|-------------|-------------|-----------|
| | Macro-$F_1$ | Micro-$F_1$ | Macro-$F_1$ | Micro-$F_1$ | $p$-value |
| PLM-ICD | 22.6 (2.5) | 24.3 (1.9) | 30.3 (1.5) | 29.5 (1.3) | $6 \times 10^{-4}$ |
| Hierarchical | 23.1 (1.7) | 24.6 (1.4) | 32.0 (1.2) | 31.3 (2.2) | $8 \times 10^{-5}$ |
In the table for MIMIC-50, the Hierarchical results are slightly different from Table 2 above. We attribute this to the different random initializations of different runs. Nevertheless, for the Hierarchical Transformer, we can still observe a significant improvement with small $p$-values.
---
### Question 1: Suggest swapping equations (1) and (2)
**A:** Thank you for this suggestion. We will swap equations (1) and (2) and update expressions for better clarity.
---
### Question 2: Section 4.3: can you explain what is the role of perm operation (or saying, why it is necessary)?
**A:** We apologize for any confusion here. By using `perm` to get a random permutation of section indices, we generate a random shuffle of all sections in a clinical note. This shuffling is used as a denoising technique in the BART paper. We will clarify this in the future version.
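For illustration, a minimal sketch of this step (the section names below are invented; the function is our paraphrase of the `perm` operation, not the paper's code):

```python
import random

def shuffle_sections(sections, seed=None):
    """Return a note's sections in a random order (cf. BART's sentence permutation)."""
    rng = random.Random(seed)
    perm = list(range(len(sections)))   # section indices
    rng.shuffle(perm)                   # the `perm` operation
    return [sections[i] for i in perm]

note = ["chief complaint: ...", "history of present illness: ...",
        "medications: ...", "discharge diagnosis: ..."]
shuffled = shuffle_sections(note, seed=7)
```

The shuffled note keeps exactly the same content, so the model is pushed to be robust to the order of sections rather than memorizing it.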
---
Rebuttal Comment 1.1:
Comment: Thanks for providing additional results. I do not have other major concerns.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank your effort and valuable suggestions in this review! | Summary: The paper proposed a semi-structured automatic ICD coding algorithm with a contrastive pre-training and masked section training and evaluate the algorithm using MIMIC-III dataset.
Strengths: The paper is well structured with clear explanation in research motivation, related work, experiment configuration and results.
Empirical results were obtained with multiple baseline models on benchmark dataset MIMIC-III.
Code and data are also provided.
Weaknesses: 1. In the related work part, the paper misses the weak supervision approach applied on ICD coding.
2. Result tables (Table 1 and Table 2) and result plots (Figure 4) miss confidence intervals (CIs). Standard deviations are not intuitive to illustrate the variance for model comparison, especially only 5 repetitions are performed. Pls compute the CIs for each results, which should be straight forward.
3. The comparison of w/C and w/M in Figure 4 is interesting to illustrate the utility and necessity of section identification in your proposed framework. Better to provide a table with results and confidence intervals in main text or supplement. Otherwise, the performance difference looks very trivial.
In general, the model evaluation is weak in the current presentation, which might undermine the technical soundness.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Are the results cross validated? It is not specified in the main text or supplement. Pls justify the significance and generalizability of the proposed model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes. Limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
We truly appreciate your suggestions and understand your concerns mainly come from the result presentation. We have added the confidence intervals for Table 1 and Table 2. We have also added one representative table with confidence intervals of Figure 4. We sincerely hope these updates can adequately address your concerns.
---
### Weakness 1: In the related work part, the paper misses the weak supervision approach applied on ICD coding.
**A:** Thank you for this suggestion. We will add the following papers and corresponding discussions to the related work part:
- Dong, Hang, et al. "Rare disease identification from clinical notes with ontologies and weak supervision." 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2021.
- Cusick, Marika, et al. "Using weak supervision and deep learning to classify clinical notes for identification of current suicidal ideation." Journal of psychiatric research 136 (2021): 95-102.
- Gao, Chufan, et al. "Classifying unstructured clinical notes via automatic weak supervision." Machine Learning for Healthcare Conference. PMLR, 2022.
---
### Weakness 2: Result tables (Table 1 and Table 2) and result plots (Figure 4) miss confidence intervals (CIs). Standard deviations are not intuitive to illustrate the variance for model comparison, especially only 5 repetitions are performed. Pls compute the CIs for each results, which should be straight forward.
**A:** We appreciate this suggestion. In the following tables, we have added the 95% confidence intervals of the paired *t*-test for Table 1 and Table 2. Here, the confidence interval for the paired *t*-test indicates that, with 95% confidence, the difference in average Macro-$F_1$ scores lies in this interval. We believe that, together with the $p$-values, the confidence intervals provide evidence that our method is a significant improvement over the backbone models.
Confidence Interval for Table 1 (MIMIC-50):
| Model | Macro-$F_1$ w/o CM | Macro-$F_1$ w/ CM | $p$-value | Confidence Interval |
|--------------|--------------------|-------------------|--------------------|---------------------|
| MultiResCNN | 60.8 (0.3) | 62.2 (0.3) | $1 \times 10^{-4}$ | [1.2, 2.3] |
| HyperCore | 61.1 (0.2) | 62.0 (0.2) | $5 \times 10^{-3}$ | [0.4, 1.4] |
| JointLAAT | 66.4 (0.1) | 67.2 (0.2) | $7 \times 10^{-3}$ | [0.4, 1.3] |
| EffectiveCAN | 66.7 (0.1) | 67.5 (0.2) | $3 \times 10^{-3}$ | [0.4, 0.9] |
| PLM-ICD | 64.5 (0.3) | 65.2 (0.1) | $2 \times 10^{-4}$ | [0.4, 0.7] |
| MSMN | 68.1 (0.2) | 69.1 (0.1) | $8 \times 10^{-4}$ | [0.7, 1.3] |
Confidence Interval for Table 2 (MIMIC-rare-50):
| Model | Macro-$F_1$ w/o CM | Macro-$F_1$ w/ CM | $p$-value | Confidence Interval |
|--------------|--------------------|-------------------|--------------------|---------------------|
| MultiResCNN | 11.2 (2.1) | 22.8 (1.3) | $5 \times 10^{-4}$ | [9.4, 16.0] |
| HyperCore | 12.5 (1.3) | 23.4 (1.9) | $3 \times 10^{-5}$ | [10.6, 13.7] |
| JointLAAT | 20.2 (1.9) | 28.6 (1.1) | $2 \times 10^{-4}$ | [8.3, 13.3] |
| EffectiveCAN | 19.8 (1.4) | 27.1 (2.4) | $1 \times 10^{-4}$ | [6.9, 10.7] |
| PLM-ICD | 22.6 (2.5) | 30.3 (1.5) | $6 \times 10^{-4}$ | [6.3, 11.0] |
| MSMN | 23.7 (1.0) | 31.2 (1.3) | $2 \times 10^{-5}$ | [8.7, 10.8] |
---
### Weakness 3: The comparison of w/C and w/M in Figure 4 is interesting to illustrate the utility and necessity of section identification in your proposed framework. Better to provide a table with results and confidence intervals in main text or supplement. Otherwise, the performance difference looks very trivial. In general, the model evaluation is weak in the current presentation, which might undermine the technical soundness.
**A:** We completely understand the importance of confidence intervals. Here, we have added confidence intervals for the variants of MSMN in terms of the Macro-$F_1$ score. Please understand that the table would be very large if we listed all variants of all models; therefore, due to the space limit, we choose the confidence intervals of the MSMN variants as a representative. We will add the full table to the main paper and supplementary material in the future version.
| Variant | Macro-$F_1$ of Variant | Confidence Interval for Macro-$F_1$ w/CM: 69.1 (0.1) |
|---------|------------------------|------------------------------|
| w/o CM | 68.1 (0.2) | [0.7, 1.3] |
| w/ J | 68.3 (0.1) | [0.6, 1.0] |
| w/ C | 68.7 (0.2) | [0.3, 0.8] |
| w/ M | 68.9 (0.1) | [0.2, 0.4] |
---
### Question: Are the results cross validated? It is not specified in the main text or supplement. Pls justify the significance and generalizability of the proposed model.
**A:** In the Appendix, Dataset Statistics (Table 3), we demonstrated that the dataset is split into training, dev (validation), and test sets. We follow the dataset split settings in CAML [1] and KEPT [2], which use a random split. Our experiments are conducted with cross-validation on the dev set to tune hyper-parameters. We will clarify this in the main paper.
It is worth noting that we strictly follow the basic rules for training deep learning models in experiments. These rules include but are not limited to random dataset split, cross-validation, multiple runs with different random seeds, and comparing with strong baselines. We believe together with the suggested confidence intervals, these rules can ensure the significance and generalizability of the proposed model.
> [1] Mullenbach, James, et al. "Explainable Prediction of Medical Codes from Clinical Text." Proceedings of NAACL 2018.
>
> [2] Yang, Zhichao, et al. "Knowledge Injected Prompt Based Fine-tuning for Multi-label Few-shot ICD Coding." arXiv preprint arXiv:2210.03304 (2022).
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer VQpM,
We would like to thank you again for your reviews. We understand reviewing is a time-consuming process. Your feedback on our rebuttal is more than valuable in improving the quality of our paper. If there are any further concerns or questions, please feel free to let us know before the author discussion period ends. We will be happy to answer them during the discussion.
Thank you!
---
Rebuttal Comment 1.2:
Title: Thank you
Comment: Thank you for providing the answers to the questions as well as confidence intervals for the experiments. However, it seems that a two-sided t-test is performed to check the difference in distributions, instead of a one-sided t-test to prove that the performance is improved from one to the other. It's not enough to indicate the proposed method is better than the baseline. Pls justify. Thank you!
---
Reply to Comment 1.2.1:
Comment: To justify, we further calculate one-sided (greater) paired *t*-test using scipy's `ttest_rel` function:
```python
from scipy.stats import ttest_rel  # confidence_interval() requires SciPy >= 1.10

# a: per-run results of our method; b: per-run results of the backbone method
result = ttest_rel(a, b, alternative='greater')
print('p-value:', result.pvalue)
print('confidence interval:', result.confidence_interval(confidence_level=0.95))
```
Here, `a` is the results of our method, and `b` is the results of the backbone methods. We demonstrate the new p-values and confidence intervals as follows:
Confidence Interval for Table 1:
| Model | Macro-$F_1$ w/o CM | Macro-$F_1$ w/ CM | $p$-value | Confidence Interval |
|--------------|--------------------|-------------------|--------------------|---------------------|
| MultiResCNN | 60.8 (0.3) | 62.2 (0.3) | $6 \times 10^{-5}$ | [1.3, $+\infty$] |
| HyperCore | 61.1 (0.2) | 62.0 (0.2) | $3 \times 10^{-3}$ | [0.5, $+\infty$] |
| JointLAAT | 66.4 (0.1) | 67.2 (0.2) | $4 \times 10^{-3}$ | [0.5, $+\infty$] |
| EffectiveCAN | 66.7 (0.1) | 67.5 (0.2) | $1 \times 10^{-3}$ | [0.5, $+\infty$] |
| PLM-ICD | 64.5 (0.3) | 65.2 (0.1) | $1 \times 10^{-4}$ | [0.4, $+\infty$] |
| MSMN | 68.1 (0.2) | 69.1 (0.1) | $4 \times 10^{-4}$ | [0.9, $+\infty$] |
Confidence Interval for Table 2:
| Model | Macro-$F_1$ w/o CM | Macro-$F_1$ w/ CM | $p$-value | Confidence Interval |
|--------------|--------------------|-------------------|--------------------|---------------------|
| MultiResCNN | 11.2 (2.1) | 22.8 (1.3) | $2 \times 10^{-4}$ | [9.8, $+\infty$] |
| HyperCore | 12.5 (1.3) | 23.4 (1.9) | $1 \times 10^{-5}$ | [11.6, $+\infty$] |
| JointLAAT | 20.2 (1.9) | 28.6 (1.1) | $9 \times 10^{-5}$ | [8.8, $+\infty$] |
| EffectiveCAN | 19.8 (1.4) | 27.1 (2.4) | $6 \times 10^{-5}$ | [7.2, $+\infty$] |
| PLM-ICD | 22.6 (2.5) | 30.3 (1.5) | $3 \times 10^{-4}$ | [6.8, $+\infty$] |
| MSMN | 23.7 (1.0) | 31.2 (1.3) | $1 \times 10^{-5}$ | [9.2, $+\infty$] |
Confidence Interval for Figure 4 (MSMN):
| Variant | Macro-$F_1$ of Variant | Confidence Interval for Macro-$F_1$ w/CM: 69.1 (0.1) |
|---------|------------------------|------------------------------|
| w/o CM | 68.1 (0.2) | [0.8, $+\infty$] |
| w/ J | 68.3 (0.1) | [0.6, $+\infty$] |
| w/ C | 68.7 (0.2) | [0.4, $+\infty$] |
| w/ M | 68.9 (0.1) | [0.2, $+\infty$] |
We hope the new results can adequately address your concerns. | Summary: The paper tackles automatic ICD coding. It lists challenges and proposes solutions to them:
1. For ignoring structural information, it proposes a content-based algorithm that automatically segments clinical notes into sections.
2. For limited availability of data and variability of clinical notes, it proposes a contrastive learning framework based on a soft multi-label similarity with tree edit distance and a masked section training strategy.
The authors conduct extensive experiments on MIMIC-III and demonstrate that their proposed methods can enhance the performance of existing ICD coding models.
Strengths: - The paper is well-written and easy to follow. The introduction, related work and preliminaries gives a clear background and motivation to the problem.
- Code is provided in the supplementary material.
- DF-IAPF is neat, easy to implement, and fast to run.
- DF-IAPF has two assumptions: 1) section titles have high document frequency; 2) section titles have low average phrase frequency. While there are corner cases where a section title only appears in some notes, or appears multiple times in a note, the paper provides both qualitative and quantitative analyses that compare DF-IAPF with a rule-based algorithm and demonstrate the effectiveness of DF-IAPF.
- Soft multi-label similarity based on tree edit distance captures the hierarchy difference between two ICD label sets.
- The choices of tasks cover frequent, rare, and the entire ICD codes in MIMIC-III.
- Experiments verify the effectiveness of proposed methods on a diverse set of backbones.
- Ablation studies clearly show the improvement of each method proposed in this paper.
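To make the two DF-IAPF assumptions above concrete, a toy scoring rule can be sketched. This is a hypothetical illustration, not the paper's actual algorithm: the function name `df_iapf_scores` and the DF/PF ratio are my assumptions.

```python
def df_iapf_scores(notes, candidates):
    """Toy scoring rule for the two assumptions: a good section title has
    high document frequency (DF) across notes and low average phrase
    frequency (PF) within the notes that contain it."""
    scores = {}
    for phrase in candidates:
        counts = [note.count(phrase) for note in notes]
        containing = [c for c in counts if c > 0]
        if not containing:
            scores[phrase] = 0.0
            continue
        df = len(containing) / len(notes)            # fraction of notes with the phrase
        avg_pf = sum(containing) / len(containing)   # mean occurrences per containing note
        scores[phrase] = df / avg_pf                 # high DF, low avg PF -> high score
    return scores

# Three toy clinical notes (invented for illustration)
notes = [
    "chief complaint: chest pain. past medical history: none. pain pain",
    "chief complaint: fever. past medical history: asthma.",
    "chief complaint: cough. past medical history: copd.",
]
scores = df_iapf_scores(notes, ["chief complaint", "past medical history", "pain"])
```

Under this scoring, true section titles ("chief complaint", "past medical history") score higher than frequent content words like "pain", which recur within a single note.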
Weaknesses: An expert selection process is needed after DF-IAPF proposes a candidate set. How important is the expert selection process? There should be a comparison between DF-IAPF fully automatic and DF-IAPF followed by an expert selection.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Line 238: Is h_section more accurate than h_note?
- Typo in line 12 of Algorithm 1 in Appendix A: TF(t) -> PF(t)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: This paper includes a section in Appendix, discussing broader impacts. The conclusion section also briefly states the limitations of this work. It would be better to have a separate section to discuss these limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
We are delighted to know that you think our work has multiple strengths. We understand that your primary concern is the role of medical experts in the title selection. To address this, we have added the comparison between the titles extracted by our algorithm and those selected by medical experts. The results demonstrate that the titles derived from our algorithm align closely with the titles chosen by experts. This evidence confirms that the selection process demands minimal effort from experts. We sincerely hope these results can adequately address your concerns.
---
### Weaknesses: An expert selection process is needed after DF-IAPF proposes a candidate set. How important is the expert selection process? There should be a comparison between DF-IAPF fully automatic and DF-IAPF followed by an expert selection.
**A:** We agree it is important to discuss the role of medical experts in candidate selection. Thank you for your suggestion. We list the originally extracted titles by our algorithm and the selected titles by medical experts as follows:
| Rank | Original Title | Rank | Selected Title By Medical Experts |
|------|------------------------------------- |------|--------------------------------------|
| 1 | history of present illness | 1 | history of present illness |
| **2** | **sex f** | 2 | sex |
| **3** | **sex m** | - | |
| 4 | date of birth | 3 | date of birth |
| 5 | discharge date | 4 | discharge date |
| 6 | admission date | 5 | admission date |
| 7 | social history | 6 | social history |
| 8 | past medical history | 7 | past medical history |
| 9 | discharge medications | 8 | discharge medications |
| 10 | medications on admission | 9 | medications on admission |
| 11 | discharge diagnosis | 10 | discharge diagnosis |
| 12 | discharge condition | 11 | discharge condition |
| 13 | discharge instructions | 12 | discharge instructions |
| 14 | major surgical or invasive procedure | 13 | major surgical or invasive procedure |
| 15 | brief hospital course | 14 | brief hospital course |
| 16 | pertinent results | 15 | pertinent results |
| 17 | followup instructions | 16 | followup instructions |
| 18 | family history | 17 | family history |
| 19 | chief complaint | 18 | chief complaint |
| 20 | attending | 19 | attending |
| 21 | physical exam | 20 | physical exam |
We can see medical experts only need to correct `sex m` and `sex f`. Since the extracted titles are mostly correct, there is actually little effort required by medical experts. Therefore, the role of medical experts in this process is to validate the extracted titles by the proposed DF-IAPF method, which further evaluates the effectiveness and accuracy of the DF-IAPF method.
To clarify, we will also add this table to Appendix in the future version.
---
### Question 1: Line 238: Is h_section more accurate than h_note?
**A:** Thank you for this suggestion! We used $h_{note}$ as a general symbol for output of $Enc_{note}$ . But here, we agree that $h_{sec}$ is more accurate. We will update this notation in the future version.
---
### Question 2: Typo in line 12 of Algorithm 1 in Appendix A: TF(t) -> PF(t)?
**A:** Thank you for this correction. We will fix this in the future version.
---
### Limitations: This paper includes a section in Appendix, discussing broader impacts. The conclusion section also briefly states the limitations of this work. It would be better to have a separate section to discuss these limitations.
**A:** Thank you for this suggestion! We agree that it will be better to add an independent section about limitations. We will add it in the future version, and it will look like this:
> ### Limitations
>
> Although the proposed training strategies are able to enhance existing ICD coding models, they are dependent on the design of these models. Even a well-designed model with many parameters generally overfits with limited training data; in such cases, our proposed training strategies are a good enhancement. Additionally, we only focus on the variability caused by the order of sections in this work, but there are other forms of variability such as typos and synonyms. In the future, we plan to design new ICD coding models based on sections and to consider more types of variability to further improve the robustness of the training process.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer ijoF,
We would like to thank you again for your feedback. If there are any further concerns or questions, please do not hesitate to let us know before the author discussion period ends. We will be happy to answer them during the discussion.
Thank you!
---
Rebuttal Comment 1.2:
Comment: Thanks for your responses. I raised my score to 7. I do not have other major concerns.
---
Reply to Comment 1.2.1:
Comment: We sincerely thank you for your great effort and valuable suggestions in this review! | null | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Low Tensor Rank Learning of Neural Dynamics | Accept (poster) | Summary: Proposes a low tensor rank recurrent neural network (ltrRNN) architecture, in which the tensor constructed by stacking the RNN weight matrices of different trials is constrained to have low tensor rank. Empirically shows that ltrRNNs can fit neural recordings during a motor learning task, achieving lower unexplained variance than baseline methods. Then, demonstrates that an RNN trained to perform the same motor learning task yields dynamics that can also be fit well with ltrRNNs (i.e. low tensor rank dynamics). Lastly, theoretically analyzes how gradient-based optimization can lead to low rank, establishing upper bounds on matrix and tensor ranks for RNNs.
Discloser: The research area of the current paper (interplay between machine learning and neuroscience) falls outside my expertise, and so it is difficult for me to assess the novelty and significance of some of the contributions. My review mainly focuses on presentation, soundness, and the theoretical analysis of gradient descent constraining tensor rank of weight updates.
Strengths: 1. Reads relatively well.
2. As far as I am aware, the technique proposed for modeling structure in a task learning process through low tensor rank of weights across different iterations is novel. I found the idea of examining such low rank structure insightful --- existing characterizations of (low rank) structure during gradient-based learning typically focus on weights/representations of a single iteration. I believe this concept may turn out useful for future study of implicit regularization of gradient descent.
3. Analyzing the dynamics of gradient-based optimization is a subject of significant interest in recent years. The bounds on the matrix and tensor ranks of the gradient and weights of a continuous RNN contribute to a line of works suggesting that gradient descent leads to low rank solutions (e.g. in matrix and tensor factorizations as well as non-linear networks [1, 2, 3, 4, 5]). With that said, the form of the RNN considered is unorthodox in terms of its update rule and in being continuous-time, which may limit the impact of these results.
[1] Li, Zhiyuan, Yuping Luo, and Kaifeng Lyu. "Towards resolving the implicit bias of gradient descent for matrix factorization: Greedy low-rank learning." arXiv preprint arXiv:2012.09839 (2020).
[2] Razin, Noam, Asaf Maman, and Nadav Cohen. "Implicit regularization in tensor factorization." International Conference on Machine Learning. PMLR, 2021.
[3] Razin, Noam, Asaf Maman, and Nadav Cohen. "Implicit regularization in hierarchical tensor factorization and deep convolutional neural networks." International Conference on Machine Learning. PMLR, 2022.
[4] Boursier, Etienne, Loucas Pillaud-Vivien, and Nicolas Flammarion. "Gradient flow dynamics of shallow relu networks for square loss and orthogonal inputs." Advances in Neural Information Processing Systems 35 (2022): 20105-20118.
[5] Timor, Nadav, Gal Vardi, and Ohad Shamir. "Implicit regularization towards rank minimization in relu networks." International Conference on Algorithmic Learning Theory. PMLR, 2023.
Weaknesses: I found the empirical evidence to be somewhat unsatisfactory since it only includes two datasets. Experiments on further tasks/datasets may greatly solidify the viability of ltrRNN. Such experiments can reveal whether the low tensor rank dynamics are specific to the type of neural recording data examined here or it is more general (and thus significant).
An additional (more minor) comment: Some of the terms used are non-standard in the machine learning literature. Since results such as those in Section 6 can be of interest to researchers not familiar with this terminology it may be best to clarify (e.g. in footnotes or an appendix) their meaning. For example, the terms “trial”, “task condition”, “chaotic regime”.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you applied the ltrRNN to other datasets? Or is the purpose of the architecture specific to the kind of data reported in the paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed possible limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback regarding the novelty and utility of our work.
**Q1. Relationship between continuous-time and discrete-time RNNs.**
**A.** We used a formulation of a continuous-time RNN that is commonly used in neuroscience applications of machine learning research [1-3], as these equations can be seen as approximations to more biologically motivated systems of ODEs describing neural circuit dynamics [4]. More generally, there has been a recent surge of interest in applications of ODEs to machine learning [5], as moving to the continuous domain enables the large body of literature on the mathematical theory behind ODEs and dynamical systems to be leveraged. For example, we exploited the adjoint state method to derive our analytical bounds on the ranks of the weight updates. In broad strokes, the adjoint method works by defining a new ("adjoint") dynamical system that is driven by the time-inverted dynamics of the original ODE (see Supplementary Materials D.1.1-2 for a precise mathematical treatment). However, this is not always possible for a discrete-time RNN ($\mathbf{x}_{t+1} = f(\mathbf{x}_t,\theta)$), as a non-invertible $f$ would mean that the dynamics of $\mathbf{x}_t$ cannot be time-inverted (as opposed to the continuous-time RNN, $\mathbf{\dot x} = f(\mathbf{x},\theta)$). Therefore, while we expect that an extension of our analyses from continuous to discrete-time RNNs is possible, it would be a non-trivial exercise.
Regarding the update rule, we note that while our bounds in the main text are based on gradient descent, they can be extended to Adam (line 777 of Supplementary Material D). In the revised version of the manuscript, we will point the reader to the relevant section of the Supplementary Material.
Finally, we thank the reviewer for pointing out these additional citations on implicit regularization, which we had missed. We will cite these in the discussion section.
**Q2. Unfamiliar terminology.**
**A.** We thank the reviewer for pointing out the neuroscience-specific jargon. We will modify the revised version with the following definitions: Typically, in a neuroscience experiment, the animal is required to perform the same task over many repetitions in order to quantify the variance of neural activity. Each of these repetitions is a *trial*. In addition, a typical task often has many "conditions" that change the target output of the task (for RNNs different condition corresponds to mapping a different input to another target output). Regarding chaotic dynamics, it has been shown that in RNNs with weights $\sim \mathcal{N}(0,\sigma^2)$, increasing $\sigma^2$ leads to a transition from non-chaotic dynamics (the system has non-positive Lyapunov exponents and $\bf x$ settles into a fixed point attractor) to chaotic dynamics (the system has at least one positive Lyapunov exponent and $\bf x$ displays large fluctuations over time, see e.g. [3]). We will incorporate these definitions in the introduction and in footnotes in the revised version.
**Q3. Application to other datasets.**
**A.** To address the reviewer's question, and to demonstrate that the idea of low-tensor-rank learning is not specific to motor learning, we have now performed additional experiments (see General Response to Reviewers for details) which confirm that other forms of learning in neural data and in task-trained RNNs are often low rank.
In general, however, we acknowledge that we do not expect *all* changes in brain activity to necessarily have low tensor rank weights. An important counterexample is the bump attractor network [9]. In this model, each neuron has excitatory (positive weight) connections with its adjacent neurons, and inhibitory (negative weight) connections with distal neurons, e.g. $W_{ij}^{(k)}=w_E$ if $|i-j|<r$, else $W_{ij}^{(k)} = -w_I$, where $w_E,w_I>0$ and $r$ is the radius of the excitatory connections. Because $W^{(k)}$ has banded structure along its diagonal, it will have full matrix rank. Therefore in the case of the brain learning a bump attractor network, and since a high matrix rank in one of the slices of the tensor implies a high tensor rank, we expect $\bf W$ to have high tensor rank. As this is an important counterexample to the low tensor rank learning framework, we will incorporate it into the discussion of our revised paper regarding universality and limitations of the ltrRNN framework.
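The counterexample above can be checked numerically. This is an illustrative sketch with arbitrary parameter values, confirming that local excitation with distal inhibition forces a (near-)full matrix rank:

```python
import numpy as np

N, r, w_E, w_I = 50, 3, 1.0, 0.5
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
# W_ij = w_E if |i - j| < r (local excitation), else -w_I (distal inhibition)
W = np.where(np.abs(i - j) < r, w_E, -w_I)

# The banded diagonal structure yields a high (near-full) matrix rank
rank = np.linalg.matrix_rank(W)
```

Since a high matrix rank in any single slice lower-bounds the tensor rank, a weight tensor stacking such matrices cannot have low tensor rank.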
**References**
[1] Turner, Dabholkar, and Barak. "Charting and navigating the space of solutions for recurrent neural networks." *NeurIPS* 2021.
[2] Schuessler, Mastrogiuseppe, Dubreuil, Ostojic, and Barak. "The interplay between randomness and structure during learning in RNNs." *NeurIPS* 2020.
[3] Kadmon, Timcheck, and Ganguli. "Predictive coding in balanced neural networks with noise, chaos and delays." *NeurIPS* 2020.
[4] Dayan and Abbott. Theoretical Neuroscience. *MIT Press*, 2001.
[5] Chen, Rubanova, Bettencourt, and Duvenaud. "Neural ordinary differential equations." *NeurIPS* 2018.
[6] Pontryagin, Mishchenko, Boltyanskii, and Gamkrelidze. The mathematical theory of optimal processes. *Classics of Soviet mathematics*, 1962.
[7] Miller and Fumarola. "Mathematical equivalence of two common forms of firing-rate models of neural networks." *Neural Computation*, 2012.
[8] Humphreys, Daie, Svoboda, Botvinick, and Lillicrap. "BCI learning phenomena can be explained by gradient-based optimization." *bioRxiv* 2022. https://doi.org/10.1101/2022.12.08.519453
[9] Compte, Brunel, Goldman-Rakic, and Wang. "Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model." *Cerebral Cortex* 2000.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, I have read it and the other reviews carefully.
In light of the additional experiments, structural changes to presentation, and additional clarifications delineated in the response, I am raising my initial recommendation to accept. | Summary: The presented work investigated the 3-tensor formed by the weight matrices of RNNs across trials and found that it is low-rank. The authors also presented a mathematical proof that the weights learned by gradient descent on low-dimensional tasks are low-rank.
Strengths: First I should acknowledge that I am not an expert in neuroscience, and I found it hard to fully understand this paper, so I can only provide limited insights.
Strengths:
- The low-rank property of RNNs is interesting and potentially useful for understanding neural dynamics and developing neural network architectures.
- The mathematical framework could be valuable for comprehending the nature of gradient descent.
------
The author rebuttal has addressed most of my questions. However, since I am not familiar with neuroscience, I have decided to keep my current rating for this paper as borderline reject with the lowest confidence value.
Weaknesses: - The empirical results seem to be limited since the experiments are only performed on motor learning tasks. It's unclear how universal the low-rank property is.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: It might be my problem but I found this paper hard to understand. For example, what does "weights change smoothly over trials" (line 129) mean? I suppose trials are unordered, so what does smoothness mean here? Also, what does "smooth covariance matrix" mean in line 132? $A$ is the covariance matrix of which variable? In the "slow timescale variability in data" (line 136), the variability of what? In "we compared the performance of the full tensor RNN to a static RNN", what is a static RNN? It would be beneficial if the authors provided clear mathematical definitions for both the model and the task, as I currently feel somewhat out of context.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that our framework is valuable for understanding gradient based learning as well as neural dynamics.
**Q1. Generality of low-rank learning dynamics.**
**A.** We have now included three additional task-trained RNN simulations and an additional neural dataset to validate the generality of our results. See our response to Q4 of Reviewer Qcae and the General Response to Reviewers for additional details.
**Q2. Lack of clarity for non-specialist readers.**
**A.** We appreciate the reviewer's feedback that our paper may be unclear to readers without expertise in neuroscience. As our work intersects machine learning and computational neuroscience, we believe it is important that their presentation can be understood by both communities. We will therefore clarify the neural data application section of the manuscript by better defining terminology that is not standard in machine learning. We will also highlight earlier in the manuscript the mathematical results and their associated numerical simulations in order to cue potential readers who may be interested in gradient learning dynamics, and will additionally provide better insight regarding their broader significance in machine learning research.
**Q3. Trial ordering.**
**A.** We thank the reviewer for pointing out the lack of clarity of the smoothness constraint we impose on the weights as well as about its relationship to the concept of a *trial*.
One trial corresponds to one repetition of the task of the animal: in Figure 1, this entails moving the cursor from the center of the screen to the target. The different targets in the experiment represent different task conditions. Trials are indeed ordered in the sense that trial $k+1$ occurs after trial $k$ in the original experiment. However each trial is assigned a random task condition.
**Q4. Smooth changes in weights over trials.**
**A.** Over learning, neural activity changes in order to perform the computations necessary for the task. This change in neural activity is generally accepted to be due to changes in the synaptic weights, which evolve over slow timescales due to well-known plasticity mechanisms such as Hebbian learning [1-2]. Because synaptic plasticity is slower than stimulus-driven changes in neural firing, we assume a separation of timescales such that the weight matrix stays constant within a trial but changes from one trial to the next (this separation of timescales has also previously been emphasized within neuroscience [3-4]). This is parametrized by defining the weight matrix on trial $j$ as $W^{(j)} \in \mathbb{R}^{N \times N}$. When we say that the weights change smoothly over trials (e.g., in line 129), we mean that $W^{(j+1)}-W^{(j)}$ should be small.
**Q5. Smooth covariance matrix.**
**A.** As a reminder, on trial $j$ the weight matrix is $W^{(j)} = \sum_r^R c_r^{(j)} {\mathbf a}_r \otimes {\mathbf b}_r$, where $c_r^{(j)}$ is the $j$th element of $\mathbf{c}_r$. Therefore the trial factors $\mathbf{c}_r$ represent how $W^{(j)}$ changes over trials. Our assumption that the $W^{(j)}$ change smoothly over trials thus corresponds to an assumption that the $c_r^{(j)}$ changes smoothly over $j$.
More practically, we implement smoothness of the weights over trials by constraining the temporal covariance between $W^{(i)}$ and $W^{(j)}$. This can be done by first stacking the trial factors into a matrix $\mathbf{c}=[\mathbf{c}_1, ..., \mathbf{c}_R]$, and by assuming that its covariance matrix $\mathrm{Cov}(\mathbf{c}) \in \mathbb{R}^{K \times K}$ is given by a smooth kernel. In Section 2 of the main text and Supplementary Material A we detail how this can be achieved. We will reword this section of the revised paper to clarify this smoothness constraint for the reader.
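As a concrete illustration of this construction, the following sketch assumes a squared-exponential kernel with arbitrary parameter values (the exact kernel and optimization details are in Supplementary Material A):

```python
import numpy as np

K, R, length_scale = 100, 3, 20.0   # trials, tensor rank, kernel length scale
trials = np.arange(K)

# Squared-exponential kernel: weights on nearby trials are highly correlated
cov = np.exp(-0.5 * ((trials[:, None] - trials[None, :]) / length_scale) ** 2)
L = np.linalg.cholesky(cov + 1e-5 * np.eye(K))  # small jitter for numerical stability

rng = np.random.default_rng(0)
z = rng.standard_normal((K, R))  # unconstrained parameters (optimized in practice)
c = L @ z                        # trial factors c_r^{(j)} with smooth trial-to-trial covariance

# Consecutive trial factors differ far less than the unconstrained draws do
smooth_step = np.abs(np.diff(c, axis=0)).mean()
rough_step = np.abs(np.diff(z, axis=0)).mean()
```

Optimizing the unconstrained `z` while mapping through the kernel's Cholesky factor keeps the resulting weight matrices changing smoothly from one trial to the next.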
**Q6. Slow timescale variability in data.**
**A.** In line 136 we refer to the variability of the neural activity of the recording. By slow timescale, we mean changes from trial to trial (i.e., over minutes or even days), rather than rapid changes (i.e., over milliseconds). Recent work has suggested that this separation of timescales captures different biological processes, with rapid changes representing processing of external stimuli and slower changes representing learning [3]. Here we further assume that the variability of neural activity over trials due to learning can be accounted for by changes in $W^{(j)}$, while the variability of neural activity within a single trial is the result of the ODE defining the network for a fixed $W^{(j)}$ (see also our response to Q4).
**Q7. Static RNN.**
**A.** By "static RNN" (line 160) we mean an RNN whose weights remain fixed between trials (i.e., $W^{(i)} = W^{(j)}$ for all $i,j$). That is, we fitted a single set of weights ($N^2$ parameters) for the entire neural recording. We will now refer to this model simply as "an RNN with fixed weights over trials".
**Q8. Mathematical definitions of the model.**
**A.** We agree with the reviewer's general point that having a good grasp of the ltrRNN model and training procedure is important for the reader to understand the rest of the paper. We had originally provided a detailed definition of the model and training procedure in Supplementary Material A while the main text only provided a high-level description. In the revised manuscript we will i) provide a short description of the training procedure in the main text, ii) systematically point at the relevant section in supplementary material iii) provide pseudocode of the ltrRNN fitting procedure.
**References**
[1] Confavreux, Basile, et al. "A meta-learning approach to (re) discover plasticity rules that carve a desired function into a neural network." *NeurIPS* 2021.
[2] Stevenson, Ian, and Konrad Koerding. "Inferring spike-timing-dependent plasticity from spike train data." *NeurIPS* 2011.
[3] Soulat, Hugo, et al. "Probabilistic tensor decomposition of neural population spiking activity." *NeurIPS* 2021.
---
Rebuttal 2:
Comment: I thank the authors for their response and clarification. Since I am not familiar with neuroscience, I have decided to keep my current rating for this paper as borderline reject with the lowest confidence value. I encourage the authors to add more background introduction and notation definitions in the future revision. | Summary: The work "Low Tensor Rank Learning of Neural Dynamics" investigates the low-rankness of RNNs with application to neural data, i.e. neural signals of a test subject performing a motor task.
The authors describe that RNNs are of low rank in the trial mode when parametrized as a 3-tensor where one dimension represents the different trials. The findings are validated by showing that a low-rank parametrized RNN is able to fit the motor task with similar accuracy to the full-rank network.
In their theory section, the authors propose two theorems that show the boundedness of the singular values of RNN weight matrices.
Strengths:
This is a solid paper. Particularly interesting is:
- Investigation of low-rank RNNs as a model for real-world data - in this case neural signals for a motor task.
- Analysis of the gradient dynamics, i.e. the singular values of RNN gradients,
- Extensive numerical tests to validate the propositions, supplemented by code examples.
- Comprehensive Related work section, which is important for such interdisciplinary work.
Weaknesses:
- Some method details for the ltrRNN training need clarification (see questions). In particular, an algorithm or some more mathematical details on the tensor format and weight updates are required.
This is the major drawback of this paper's presentation, in my opinion.
- The computational cost of training and used hardware should be described.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
- Line 97: Please clarify: Do you ask if the weight tensor itself is low-rank or if the updates are low-rank?
- Line 118: Add a reference or proof (in the appendix at least) for the statement. You say to yourself that this is a non-trivial statement.
- Line 96: please explain the meaning of x and u in the context of your application.
- Line 168: It is not clear, what you mean by your method. I suppose, that you mean the formulation of an order three tensor of low-rank structure as given in the Eq. of Line 95. If so, it is unclear, how the tensor is represented (Tucker format, Tensor-Trains,...) and how the network is trained. Vanilla on the factors, or with dynamical low-rank methods. Can you comment on this? An algorithm, reference, or some equations to explain the method would be necessary.
- Sec4: The authors need to specify
* a) which low-rank tensor format is chosen to save the weight tensors
* b) which update method / low-rank integrator or optimizer is used to compute the weight updates
- Line 168: You say that your method outperforms truncated SVD. My question is, how is truncated SVD training applied? Is a full rank update and afterward a truncated SVD performed in each training iteration, or do you do something else?
- Line 182: "Compared to PCA on neural data, ltrRNNs yield more interpretable visualizations..." First, (minor comment), there is a typo "intepretable". Second, this is an interesting aspect. I think, that training an RNN instead of direct application of PCA means, that you have first a differentiable (and thus smooth) model representation of the neural data, which is then, of course, nicer interpretable, and more visualizable. My questions:
1. Are the neural data smoothed in some sense before applying PCA? Is the smoothness of Fig. 3d a result of the plotting tool, or are the neural data smooth, i.e. without large jumps or discontinuities?
2. How come the neural data seem to be somewhat chaotic?
I am by no means an expert in this application field, but find this intriguing.
- Line 225 and the following: You describe how ltrRNN uncovers the low-rank structure. As per Appendix A, it seems that you construct the ltrRNN architecture such that it is low-rank by definition. Thus the network has no option other than to learn low-rank features. Can you comment on this for clarification?
- Line 256: One should also mention [2] as one of the fundamental works of adjoint-based automatic differentiation.
[2] Griewank, Andreas, Walther, Andrea; Introduction to Automatic Differentiation; PAMM; https://doi.org/10.1002/pamm.200310012
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have described various limitations of their work, and proposed concepts on how to deal with them in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback regarding our paper and its potential reach as interdisciplinary work.
**Q1. Clarification of ltrRNN model and training.**
**A.** We agree with the reviewer that having a good grasp of the ltrRNN training procedure is important for the reader to understand the rest of the paper. We had originally provided a detailed definition of the training procedure in Supplementary Material A. In the revised manuscript we will i) provide a short description of the training procedure in the main text, ii) systematically point at the relevant section in Supplementary Material, and iii) provide pseudocode of the ltrRNN fitting procedure.
**Q2. Computational cost of training.**
**A.** We agree that it is useful for the reader to know the computational cost of the method. Supplementary Material A currently includes a table describing the time necessary to fit ltrRNNs of different numbers of neurons (given that this is the main bottleneck) to the neural data application in Figure 1. In the revised manuscript we will point to this table directly when describing the ltrRNN model.
**Q3. Low rank weight tensor or low rank weight updates.**
**A.** We thank the reviewer for pointing out the lack of clarity in line 97 regarding whether the low-rank hypothesis applies to the weight tensor $\bf W$ or the tensor $\bf \Delta W $ that stacks the weight update matrices: $\Delta W^{(k)} = W^{(k)} - W^{(k-1)}$. We indeed mean that $\bf W$ itself is low tensor rank. We will correct this wording in the new version to read: "Here, we ask whether the weights have low tensor rank when they are updated over the course of learning".
**Q4. Reference for the eigenvalue equation in line 118.**
**A.** We thank the reviewer for noticing this overlook. The derivation is already present in Supplementary Material A.3, but we neglected to mention it in the main text. We will add a pointer to the derivation to the revised version.
**Q5. Meaning of x and u.**
**A.** RNNs of the form in line 96 are commonly used in neuroscience as models of networks of biological neurons [1,2], where the state $\bf x$ of the RNN models the membrane potential of the neurons, $\phi(\mathbf{x})$ models their firing rate (i.e. activity), and $\bf u$ represents input activity from other brain regions to the neurons. We will incorporate these definitions into the model description of the main text to aid the reader.
**Q6. Weight tensor representation and training.**
**A.** We agree with the reviewer that the phrasing of line 168 is misleading. By *our method* we mean ltrRNN. We directly parameterize the weights as being low tensor rank $\mathbf{W} = \sum_{r=1}^R \mathbf{a}_r \otimes \mathbf{b}_r \otimes \mathbf{c}_r$ and perform gradient descent (more precisely, ADAM) on $\mathbf{a}_r, \mathbf{b}_r, \mathbf{c}_r$. For this, we use the framework of neural ODEs which allows backpropagating through solving an ODE [3].
We believe that the addition of the training procedure pseudocode will help clarify this.
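As a purely illustrative sketch (not the authors' implementation, which additionally trains the factors with Adam through a neural ODE solver), the low-tensor-rank parameterization $\mathbf{W} = \sum_{r=1}^R \mathbf{a}_r \otimes \mathbf{b}_r \otimes \mathbf{c}_r$ amounts to reconstructing the weight tensor from its CP factors:

```python
# Illustrative sketch: reconstruct a rank-R weight tensor W from CP
# factor vectors a_r, b_r, c_r, i.e.
#   W[i][j][k] = sum_r a_r[i] * b_r[j] * c_r[k].
# In the actual model, gradient descent runs on the factors, not on W.
def cp_reconstruct(a_factors, b_factors, c_factors):
    """Each argument is a list of R vectors (lists of floats)."""
    R = len(a_factors)
    I, J, K = len(a_factors[0]), len(b_factors[0]), len(c_factors[0])
    W = [[[0.0] * K for _ in range(J)] for _ in range(I)]
    for r in range(R):
        for i in range(I):
            for j in range(J):
                for k in range(K):
                    W[i][j][k] += a_factors[r][i] * b_factors[r][j] * c_factors[r][k]
    return W
```

Because only the factors are optimized, the number of trainable recurrent parameters scales with $R(I + J + K)$ rather than $IJK$.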
**Q7. Truncated SVD**
**A.** In line 168, we agree that this wording is unclear: we apply SVD and PARAFAC to the neural data itself rather than the weights. When we say SVD and PARAFAC are outperformed by ltrRNN we mean that ltrRNN has lower mean squared error in the test dataset than either SVD or PARAFAC models of the same rank. We included these as a baseline as it is very common in neuroscience to analyze low rank SVD or PARAFAC [4] models fit to neural activity.
**Q8. Interpretable visualizations with ltrRNN**
**A.** We first thank the reviewer for noting the typo on line 182; we will correct this in the revised paper. In Figure 3, we followed common practice [5] of convolving the spike times with a Gaussian kernel, with $\sigma=40$ ms (Supplementary Material B for details). However, note that in the Supplementary Material, we also illustrate an application of ltrRNN which is fitted to maximize the log-likelihood of the spike counts assuming a Poisson distribution (Supplementary Figure 5).
There is a rich literature exploring the computational implications of chaotic dynamics in RNN models in neuroscience (e.g., [1]). However, the amount of variability in neural data (i.e., from trial to trial) precludes any definitive statement regarding whether fluctuations in recorded neural activity are due to the chaotic regime or noise. This question is especially fraught for RNNs due to technical issues in their training procedures that can prevent them from inferring chaotic dynamics from time series [6].
**Q9. Uncovering low-rank structure by definition**
**A.** The ltrRNN is indeed low-rank by definition. We thank the reviewer for pointing out that this sentence is misleading. In the revised paper we will rewrite this sentence as: "LtrRNNs enable inference of the tensor rank of the weights from the neural activity through cross-validation."
**Q10. Reference to adjoint-based automatic differentiation**
**A.** We thank the reviewer for pointing us to this work which we were previously unaware of. We will incorporate the citation into the revised paper.
**References**
[1] Kadmon, Timcheck, and Ganguli. "Predictive coding in balanced neural networks with noise, chaos and delays." *NeurIPS* 2020.
[2] Valente, Pillow, and Ostojic. "Extracting computational mechanisms from neural data using low-rank RNNs." *NeurIPS* 2022.
[3] Chen, Rubanova, Bettencourt, and Duvenaud. "Neural ordinary differential equations." *NeurIPS* 2018.
[4] Soulat, Keshavarzi, Margrie, and Sahani. "Probabilistic tensor decomposition of neural population spiking activity." *NeurIPS* 2022.
[5] Park, Seth, Paiva, Li, and Principe. "Kernel methods on spike train space for neuroscience: a tutorial." *IEEE Signal Processing* 2013.
[6] Mikhaeil, Monfared, and Durstewitz. "On the difficulty of learning chaotic dynamics with RNNs." *NeurIPS* 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers and clarifications for application specific jargon.
As the main weakness of the paper - the very application-specific wording and method presentation, which was also pointed out by other reviewers - is addressed, my initial decision to accept the paper is reinforced.
With the additional explanations I think this work is a compelling read for the NeurIPS community and helps bridge the gap between method and application for low-rank techniques. | Summary: In this paper, the authors explore the tensor rank of learning in artificial and biological neural networks. They showed that learning leads to low-tensor-rank weight updates, and derived upper bounds on the singular values of gradient dynamics of nonlinear RNNs, as well as on the matrix and tensor ranks in the linear case. Experimental results effectively support their model's conclusion.
Strengths: 1. Previous works have shown weight matrices in well-trained RNNs are low-rank. This paper focuses on whether tensors derived from weight matrices over the process of training are low-tensor-rank, which is meaningful and has a strong motivation.
2. The paper supports its results empirically.
3. The theoretical statements are detailed and valid.
Weaknesses: 1. From the perspective of researchers outside this field, a more structured presentation may be more conducive to understanding the contribution of the work and promoting it.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: None.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that our paper has a *strong motivation*, and that our claims are supported both by our *empirical* and our *mathematical results*. We agree that providing a clear presentation of these results to researchers unfamiliar with neuroscience is important for the broader impact of our work. We briefly summarize the structural changes to the paper that we will make in response to all reviewers' feedback:
* **Mathematical results.** As pointed out by other reviewers, the bounds we mathematically derive regarding the rank of the gradient of recurrent neural networks may be of interest to the broader machine learning community, but only come late in the paper. To address this, we will summarize them and their associated simulations earlier in the paper.
* **Terminology.** Some terms such as "trial" in our neural data application are not standard in machine learning. To address this, we will define neuroscience-specific terminology more systematically.
* **Main contributions.** Although the different sections of our paper cohesively support our main hypothesis that learning is low tensor rank, we acknowledge that a subset of them may be of particular interest to specific communities at NeurIPS. To address this, we will add a main contributions section at the beginning of the paper highlighting its structure so that readers can navigate to the section that is most relevant to their research.
* **LtrRNN pseudocode**. In light of multiple reviewers' comments and suggestions, we believe that understanding the fitting procedure of ltrRNN is key to understanding the paper as a whole. To address this, we will i) include a short description of the ltrRNN fitting procedure in the main text, and ii) include pseudocode of the ltrRNN fitting procedure.
* **Supplementary material.** Many of our results rely on Supplementary Material (e.g. all theorems' proofs). To address this, we will more systematically reference specific sections of Supplementary Material.
Overall, we believe that these structural changes will significantly improve the accessibility of our work. Nevertheless, any additional suggestions the reviewer may have to improve readability would be very welcome, as we want to make sure that our work is accessible to readers from across different backgrounds. | Rebuttal 1:
Rebuttal: We thank the reviewers for their helpful and supportive comments. We are pleased to have received positive feedback regarding the novelty and interest of our submission from the reviewers, several of whom are self-described non-experts in neuroscience. We believe this highlights the potential for our work to be relevant to many research areas across the broader machine learning community.
We have made two substantial improvements to the paper in response to the reviewers' constructive criticisms. Results from our new analyses can be found in **Figures R1 and R2** of the associated 1-page PDF.
**New simulations.**
To investigate the generality of our results, we have now expanded our analyses by testing the low-tensor-rank learning hypothesis on three additional RNN models commonly used in neuroscience. These include:
* Sensory evidence accumulation task [1] where an RNN must learn to integrate a noisy instantaneous input (**Figure R1 i**).
* Contextual decision making task [2] where an RNN must decide which of two noisy inputs to integrate (**Figure R1 ii**).
* Working memory task [3] where an RNN must maintain a representation of a past input over time in order to compare it to a later input (**Figure R1 iii**).
Along with the motor adaptation example in the original paper, these four tasks span perceptual, motor and cognitive processes. Moreover, RNNs trained on these tasks have been proposed as models of different areas of the brain involved in those processes (e.g., [4,5]). Thus, these new experiments constitute a more systematic validation of the low-tensor-rank weights hypothesis we put forth.
We followed the same procedure to determine the tensor rank of learning for these tasks as for the task-trained RNN in our original submission (Figure 4). That is, we first trained an RNN on each of these tasks using gradient descent, then used PARAFAC to determine the tensor rank of the resulting neuron $\times$ neuron $\times$ iteration tensor of weights over learning. *In each of these tasks we found that the variance explained indeed saturated at low tensor ranks* (at $R=1,4,$ and $3$; **Figure R1 c**). We will add these results as a supplementary figure to the revised version of the paper.
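The rank-determination step above (picking the rank at which variance explained saturates) can be sketched as follows. This is a hypothetical illustration: the actual PARAFAC fits would come from a tensor-decomposition library, and the curve values in the example are placeholders, not the paper's numbers.

```python
# Hypothetical sketch of the rank-selection step: given the fraction of
# variance explained by PARAFAC fits of increasing rank, return the
# smallest rank at which adding one more component gains less than `tol`.
def saturating_rank(variance_explained, tol=0.01):
    """variance_explained[r-1] = fraction of variance explained at rank r."""
    for r in range(1, len(variance_explained)):
        if variance_explained[r] - variance_explained[r - 1] < tol:
            return r  # rank at which the curve saturates
    return len(variance_explained)

# e.g. a curve that plateaus after rank 3:
# saturating_rank([0.55, 0.78, 0.90, 0.905, 0.907]) returns 3
```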
**New data application.**
In addition, we applied the ltrRNN model to a new dataset consisting of neural recordings in mouse visual cortex during a perceptual learning task [6] (**Figure R2 a**). As a reminder, in the ltrRNN we fit an RNN to neural activity with the explicit constraint that the weight tensor has tensor rank $R$ (Figure 1; Supplementary Material A). We found that the performance of ltrRNN in this new dataset saturated at low ranks around $R = 3$ or $4$ (**Figure R2 b**). The fitted inputs distinguished rewarded from non-rewarded stimuli after the time at which the sensory stimulus was presented (**Figure R2 c**). Some of the trial factors appeared to track slow changes over learning while others remained relatively stable over trials (**Figure R2 d**). *These new results complement our original application to data from monkey motor cortical recordings during an adaptation task, demonstrating that the ltrRNN framework is not specific to a particular kind of learning, brain region, or species.*
***Overall we believe that our new simulations and data analyses, together with our mathematical results which give generic bounds on the ranks of gradient-based learning, validate the breadth of the ltrRNN framework.***
**Clarity and terminology.**
Several reviewers also noted that parts of the manuscript were difficult to understand due to (1) some details or definitions being hidden in the supplementary information and (2) undefined terminology (often, jargon specific to the neuroscience community). We strongly agree that it is important that our manuscript be accessible to the broader machine learning community, as we believe many of our results are of more general interest, beyond applications in neuroscience. To address this, we will make the following changes to the paper:
* To avoid overloading the reader, we had originally hidden some of the model specifications and training details in the Supplementary Material. The reviewers fairly pointed out that this made it difficult to understand the paper. We will correct this by 1) incorporating many of these technical details back into the main text, and 2) including pseudocode of the ltrRNN fitting procedure.
* To aid the non-specialist reader, we will add definitions of neuroscience-specific jargon.
* We will add a *main contributions* section to highlight the two principal components of our paper: 1) ltrRNN as a method for fitting neural data, and 2) the theoretical results regarding gradient learning dynamics. We believe this section will help readers from across different communities of NeurIPS better navigate the paper.
We have provided detailed responses to individual reviewers' questions below.
**References**
[1] Zoltowski, Pillown, and Linderman. "A general recurrent state space framework for modeling neural dynamics during decision-making." *ICML* 2020.
[2] Valente, Pillow, and Ostojic. "Extracting computational mechanisms from neural data using low-rank RNNs." *NeurIPS* 2022.
[3] Schuessler, Mastrogiuseppe, Dubreuil, Ostojic, and Barak. "The interplay between randomness and structure during learning in RNNs." *NeurIPS* 2020.
[4] Feulner, Perich, Chowdhury, Miller, Gallego, and Clopath. "Small, correlated changes in synaptic connectivity may facilitate rapid motor learning." *Nature Communications* 2022.
[5] Mante, Sussillo, Shenoy, and Newsome. "Context-dependent computation by recurrent dynamics in prefrontal cortex." *Nature* 2014.
[6] Khan, Poort, Chadwick, Blot, Sahani, Mrsic-Flogel, and Hofer. "Distinct learning-induced changes in stimulus selectivity and interactions of GABAergic interneuron classes in visual cortex." *Nature Neuroscience* 2018.
Pdf: /pdf/a0c12f8f6230b693adfdb2cae1d16cc64cf6de98.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense | Accept (poster) | Summary: The paper can be split into two parts. In the first part, the authors propose a paraphrasing-based attack that circumvents various AI-generated text detectors. The authors introduce DIPPER, an 11B parameters Transformer model obtained by fine-tuning T5-XXL. By using DIPPER to create paraphrases from texts generated by LLMs, the authors show that they can successfully evade various commonly-used detectors. In the second part, the authors propose a retrieval-based generated text detector that can successfully detect instances of paraphrased generated text.
Strengths: - The paper is well-written and easy to follow.
- The authors introduce DIPPER, a paraphrasing model based on T5-XXL. The authors show that DIPPER can generate high-quality paraphrases and have tested the model in a human trial included in the Supplementary materials.
- The paraphrasing attacks are effective against the models tested.
- The authors have an extensive discussion about the limitations of their retrieval-based system.
Weaknesses: - The paper's biggest weakness is the generated text detection via retrieval methods since many assumptions must be made for it to work correctly.
- First, one needs access to all of the text generated by the model we want to test against. This is significantly limiting since the data would probably be available only to the entity that is maintaining the LLM API. Maintaining such a big database of generated text would also be expensive from a storage perspective, and computationally expensive for the retrieval phase.
- Second, using a similarity score to compare the paraphrased text with retrieved texts, one must assume that the similarity between the generated text and its paraphrased variant would be high. However, the similarity score could also be large if the two texts have similar semantics. This would be true if one used something similar to the cosine similarity between two text embeddings obtained by a neural encoder [[1]]. If one maintains a significantly large and diverse database of generated text, the chance of it containing an entry that is semantically similar to a candidate text grows, leading to FP predictions.
- Third, the method would only be viable for closed, proprietary models, and would not apply to open-source models, since it is impossible to collect a database of all the text generated by an open-source LLM that is hosted by multiple people and institutions.
- As far as I understand, the authors have used human-generated continuations only to adjust the detection thresholds to maintain a 1% FPR (lines 171-172). If this is the case, the evaluation methodology is somewhat flawed due to the points I've previously made. In my opinion, a better evaluation methodology would be using some parts of the human-generated data to fix a low FPR, and also have human-generated data with similar semantics to the LLM-generated one in the test set.
Overall, I believe that the paper is nice and explores a potential direction that could be valuable in some use-cases, but the flaws and limitations previously mentioned make me question the practical efficiency of such a detector deployed at scale in the real world. I believe the paper could be much stronger if:
a) The authors would carefully build a large and semantically diverse database of generated texts and would add human-generated texts to the test dataset. The detection method would be significantly more convincing if it did not fail in this scenario.
b) The authors would design some similarity metric that would result in a large score only if the candidate text is some paraphrase of the target. This could potentially greatly reduce the number of false positives.
[1]: https://arxiv.org/pdf/1301.3781.pdf
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: - I would like to see what happens if text generated by an LLM is kept in the database and some semantically similar entries generated by humans would be present in the test dataset.
- Following the first point, what would happen if you used DIPPER to paraphrase the human-generated text? Would there be higher similarity scores for the paraphrased samples if you have some semantically similar generated text in the database?
- Have the authors tried any similarity metric tailored for paraphrasing? Such a contribution would make the paper significantly stronger, in my opinion.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The authors have extensively discussed potential limitations in the main paper and the Supplementary materials for both their DIPPER model and the retrieval-based detection systems. While the limitations regarding the computation necessary for a large-scale retrieval system can be somewhat alleviated, as discussed in section B.2 of the Supplementary, I believe the similarity-based scoring approach to be a significant limiting factor.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback! We are grateful that the reviewer appreciated our writing, paraphraser, the paraphrasing attacks on detectors, and discussion on the limitations of retrieval.
The reviewer voiced concerns about the practicality of retrieval as a detection algorithm and mentioned technical challenges in implementing it at scale. We address these concerns below.
>similarity score could also be large if the two texts have similar semantics...if one used encoder like word2vec…paper would be stronger if...authors design some similarity metric that would result in a large score only if candidate text is some paraphrase
To the best of our understanding, the reviewer is referring to topical similarity between two texts, which could lead to high cosine similarity between non-paraphrases. This may be true for methods like word2vec, but our paper uses the P-SP embedding method which has been **explicitly trained for paraphrastic similarity using parallel data**. P-SP [4] achieves state-of-the-art performance on the STS benchmarks [1], which include plagiarism detection, measuring paraphrastic similarity on varying degrees of semantic overlap. To further illustrate the robustness of P-SP to non-paraphrases sharing topics, we conducted an experiment on the Par3 dataset. We found that the average P-SP score of actual human paraphrase pairs in Par3 is 0.76. In contrast, the P-SP of random pairs of paragraphs from the same book is just 0.09 (topically similar but not paraphrases).
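The scoring step underlying these numbers is a cosine similarity between embedding vectors. A minimal sketch is below, with small placeholder vectors standing in for actual P-SP embeddings:

```python
import math

# Sketch of the similarity scoring step: cosine similarity between two
# embedding vectors. In the paper these would be P-SP embeddings of a
# candidate text and a retrieved generation; the vectors passed in here
# are hypothetical placeholders.
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

A candidate is flagged as AI-generated when its maximum similarity against the database exceeds the detection threshold; the point above is that a paraphrase-trained encoder keeps this score low for merely topically similar pairs.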
We have also added a discussion on “semantic collisions” in the “Global Rebuttal”. Overall, the chance of collisions between topically similar non-paraphrases is low because
* the likelihood of semantic divergences between pairs exponentially increases with length
* the most effective retrievers do not solely rely on semantics
* unperturbed text will always have a 100% detection rate.
>The paper would be stronger if authors would add human-generated texts to test set…with similar semantics to LLM-generated data.
We have effectively done this in Sec. 5.2b, 5.3. The database in 5.2b contains three generations for the same prompt (hence strong topical overlap), and the test set includes the human-written output for the *same prompts*. Similarly, in 5.3, we have hundreds of database entries from the same PG19 book which closely share topical content. Our test set includes the human-written continuation for those prompts.
>the authors have used human-generated continuations only to adjust the detection thresholds to maintain 1% FPR.. better methodology would be using parts of the human-generated data to fix a low FPR, and keep rest in test set.
We do not think using human text for threshold adjustment removes them from the test set. A high FPR will push up the threshold, which in turn will lower the true positive rate. This is equivalent to taking the y-coordinate for x=1% in a classifier's ROC plot (TPR vs FPR). Moreover, in Fig 8 (Appendix), we plot the full ROC plots which consider every possible detection threshold. We find retrieval to be vastly superior and robust to paraphrasing compared to other methods across different thresholds.
While we could use a random fraction of human-written text for threshold adjustment and evaluate the other fraction’s FPR, we expect it to result in a 1% +/- delta FPR since it’s in-distribution data. To validate this hypothesis, we empirically evaluated it in one setup (PG19, BM25), using 50% of the human text to estimate the threshold. We found the FPR on the other 50% of human text to be 0.8-1.2% across runs, and we expect variance to reduce with a bigger human dataset.
However, we acknowledge that our paper does not test the out-of-distribution generalization of threshold adjustment (do thresholds for human dataset 1 give low FPR on human dataset 2?). We will do this in the next version.
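The threshold-adjustment procedure discussed above can be sketched as taking (approximately) the 99th percentile of detector scores on human-written text and then measuring the fraction of AI-generated scores above it. All scores in this sketch are hypothetical:

```python
import math

# Sketch of threshold adjustment at a fixed false positive rate (FPR):
# the detection threshold is roughly the (1 - fpr) quantile of detector
# scores on human-written text; the true positive rate (TPR) is then the
# fraction of AI-generated scores that exceed the threshold.
def threshold_at_fpr(human_scores, fpr=0.01):
    s = sorted(human_scores)
    # index of the smallest score such that at most `fpr` of the human
    # scores lie strictly above it
    idx = max(0, math.ceil(len(s) * (1 - fpr)) - 1)
    return s[idx]

def true_positive_rate(ai_scores, threshold):
    return sum(x > threshold for x in ai_scores) / len(ai_scores)
```

Sweeping `fpr` from 0 to 1 and plotting TPR against it recovers the full ROC curve referenced above.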
>Maintaining such a big database of generated text would be expensive from a storage perspective and computationally expensive
In the “Global Rebuttal”, we provide a detailed analysis estimating these requirements for a ChatGPT-scale database. In summary, we estimate them to be relatively small. We estimate the ChatGPT database needs 5TB of storage space per month, and requires just 100 seconds per retrieval on a CPU-only MacBook Pro. This is trivial compared to the scale at which Google Search (100,000+ TB index) and ChatGPT (10-15 seconds on a powerful 8x A100 GPU server) are currently operating. Moreover, major LLM providers (like OpenAI, Google) already have the infrastructure to host services like Google Search and ChatGPT at scale.
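As a rough sanity check of the order of magnitude of a per-month storage estimate of this kind (all per-visit and per-token numbers below are illustrative assumptions, not figures taken from the rebuttal's detailed calculation):

```python
# Back-of-the-envelope storage estimate for logging all generations.
# Every constant below is an illustrative assumption.
visits_per_month = 2e9       # upper end of the cited 1.5-2B monthly visits
responses_per_visit = 2      # assumed average generations per visit
tokens_per_response = 250    # assumed average response length in tokens
bytes_per_token = 5          # assumed rough average of bytes of text per token

bytes_total = (visits_per_month * responses_per_visit
               * tokens_per_response * bytes_per_token)
terabytes = bytes_total / 1e12  # raw text, before compression/deduplication
```

Under these assumed numbers the estimate lands at a few terabytes of raw text per month, consistent in magnitude with the 5TB figure above.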
>One needs access to all of the text generated by the model…limiting since the data would be available only to the entity that is maintaining the LLM API
We agree that our detector can only be implemented by the LLM API provider, and have acknowledged this in Appendix B.1. However, major LLM providers may be incentivized to implement this detector, since it can help claim innocence in potential lawsuits about the origin of malicious AI-generated content. Moreover, there are increasing government discussions on regulating AI-generated content, and both the US and EU government have recommended major LLM providers to make their AI generations detectable [2, 3].
>the method would only be viable for closed, proprietary models, and would not apply to open-source models
This is a valid concern, and we address it in the “Global Rebuttal”. Overall, we agree with the reviewer that our detector is restricted to closed-source LLMs. However, most major LLM providers are hosting their LLMs behind closed APIs. Also, watermarking, the most promising alternative to retrieval, suffers from the same limitation. Other detectors either perform very poorly or are brittle against paraphrases.
[1] https://aclanthology.org/S16-1081
[2] https://tinyurl.com/usgovt-ai
[3] https://tinyurl.com/eugovt-ai
[4] https://aclanthology.org/2022.emnlp-demos.38
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in addressing the concerns raised by me and the other reviewers in the rebuttal.
Upon reviewing the clarifications, I believe that this work can serve as a solid foundation in the efforts against paraphrasing attacks and can be a good contribution to NeurIPS. Consequently, I've adjusted my rating from 4 to 6.
Nonetheless, I maintain that future iterations of this research should encompass more comprehensive experiments and a detailed examination of the datasets employed.
I also suggest that for the camera-ready version of the paper, the authors should clarify the details regarding the human-generated test data in Sec. 5.2b and 5.3. As it stands, the test data composition remains somewhat ambiguous to me. | Summary: The authors developed a powerful paraphrase generation model called DIPPER to test the robustness of AI text detection algorithms. DIPPER successfully evaded several detectors by paraphrasing text generated by large language models. To improve detection, they proposed a defense mechanism based on retrieving similar generations from a database, which detected 80% to 97% of paraphrased text while misclassifying only 1% of human-written sequences as AI-generated.
Strengths: 1. Testing the robustness of existing detectors of AI-generated text is very interesting and important.
2. A defence method is proposed to handle the paraphrasing problem discovered in this paper.
3. The experimental results are solid and promising.
Weaknesses: 1. The proposed method needs to store all AI-generated texts. I wonder whether it is practical for popular LLMs with many users and queries, like ChatGPT.
2. More detection methods should be incorporated in experiments.
3. It would be better if more datasets and tasks are used.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it practical for popular LLMs with many users and queries, like ChatGPT, to store all their generated texts?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their useful feedback and support for our paper! In particular, we appreciate the reviewer for highlighting that 1) we study important research questions on robustness of AI-generated text detection; 2) our defense mechanisms for our attacks; 3) our paper shows solid experimental results.
The reviewer voiced some concerns about storage requirements, and requested experiments on more detection methods and datasets. We address these below.
> The proposed method needs to store all AI-generated texts. I wonder whether it is practical for popular LLMs with many users and queries, like ChatGPT.
> Is it practical for popular LLMs with many users and queries, like ChatGPT, to store all their generated texts?
We think it is very much practical at ChatGPT’s current usage rate (1.5-2B monthly visits) to store AI-generated outputs. We estimate it will take about 5TB of storage space per month (detailed calculations in the “Global Rebuttal”), similar to the size of a personal portable hard disk. Moreover, ChatGPT and Google Bard already store users’ chat history for improving their models with RLHF [1, 2]. Furthermore, major LLM providers like Google and OpenAI already have the storage infrastructure in place which they use to host services like Google Search (100,000TB+ index size) and ChatGPT / GPT3.5. In our “Global Rebuttal” we also extensively discuss scalability in terms of compute and accuracy.
> More detection methods should be incorporated in experiments.
> It would be better if more datasets and tasks are used.
Our experiments already cover a comprehensive set of five detection algorithms (DetectGPT, watermarking, OpenAI classifier, GPTZero and RankGen). At the time our research was conducted, we included every state-of-the-art contemporary AI-generated text detector we could find. Moreover, our experiments cover two practically relevant tasks modern large language models are used for: long-form question answering (six domains) and open-ended generation (two domains). Finally, our experiments are performed on outputs from three large language models (GPT2-XL, OPT-13B, GPT3.5 davinci-003), to provide a wide diversity of LM sizes and properties.
Nevertheless, we acknowledge that our scaled retrieval experiments (Section 5.3) cover outputs from only one kind of language model in two domains. As promised in the “Global Rebuttal”, we will conduct similar experiments on other AI-generated text databases like GPT4All and ShareGPT, which while smaller in size, are more diverse in nature than the RankGen training data.
[1] - https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt
[2] - https://support.google.com/bard/answer/13594961
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors very much for the detailed response. It addressed the questions raised in my review well.
Strengths: The training scheme for DIPPER is clever and well motivated (e.g. control over content re-ordering and diversity). I think the paper benefits from including both an improved attack on AI text-detection and a simple but interesting defense.
Very thorough experiments.
I appreciate the discussions of the limitations of retrieval-based defenses in the appendix.
Weaknesses: The authors do not appear to evaluate any non-DIPPER paraphrasers in their experiments. The claim in line 232 that "our ablations in Appendix C show that these paraphrasers have lower quality and are less compatible with the prompt as DIPPER paraphrasers" seems misleading, as DIPPER ablations are not the same as evaluations of existing off-the-shelf paraphraser models. While the paper presents strong results for DIPPER, I think the authors should either re-word this claim to be more clear or provide direct experimental comparisons to other paraphrasers.
Paraphrasing attacks might not be considered effective if they degrade the original text in certain ways (e.g. introduce grammatical errors, change the "tone" or "voice" of the text) even if they preserve some notion of semantic similarity. The results in the main paper heavily emphasize the semantic similarity metric of Wieting et al. as a measure of paraphrased text quality, although paraphrasers are evaluated under other metrics (e.g. human evaluations, perplexity) in the appendix. As things stand, it is not immediately clear in the main body of the paper whether DIPPER strongly preserves the quality of the original text beyond some notion of semantic similarity. This should probably be clarified by summarizing the results of the additional quality evaluations from Appendix C.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: If the underlying language model is itself capable of diverse generations, is it possible that we might see a "saturation" problem in certain topic areas? E.g. if hundreds of students use the language model to generate one-paragraph summaries of the Gettysburg address, all of which are stored in the database, will this provide a kind of "semantic coverage" of the topic such that even human-generated summaries of the Gettysburg address would be flagged as AI-generated?
I think the authors' proposed retrieval defense is promising, and their proposed improvements in Appendix B.2 are interesting, but I'm not sure whether the paper effectively argues for the feasibility of retrieval-based defenses at scale. In Appendix B.1, line 680, the authors state that "At a conservative rate of 5M queries a day, [an AI text generation] database will have almost two billion entries in a year." While the authors perform experiments to validate their retrieval-based defense at a scale of 15 million entries, it is not immediately clear how this performance would extrapolate to a database three or more orders of magnitude larger (as in their hypothetical). Along these lines, it would be helpful if the authors could point to one or more analogous text-retrieval systems capable of operating at the aforementioned hypothetical scale with the precision required for AI text detection.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and support for our paper! In particular, we are grateful for the reviewer’s appreciation of our 1) novel paraphraser training algorithm DIPPER; 2) attacks on AI-generated text detectors using DIPPER; 3) novel defense mechanism and discussion of its limitations; 4) thorough experiments on all these fronts.
The reviewer voiced a few concerns about our paraphraser evaluations, and had a few questions about the scalability of our retrieval-based defense mechanism. We address them below.
> claim in line 232 that "our ablations in Appendix C…" seems misleading, as DIPPER ablations are not the same as evaluations of existing off-the-shelf paraphraser models.
We agree with this concern and will reword L232 to better reflect our contribution. In particular, our stance is that we expect non-DIPPER paraphrasers to also evade detection. However, we expect DIPPER will produce higher quality paraphrases than a non-contextual sentence-level alternative (as shown in our Appendix). Another advantage of using DIPPER is its fine-grained control knobs to vary diversity to find a sweet spot of retaining semantics, while fooling detection systems. We did not find these traits in other off-the-shelf paraphrasers, as discussed in Appendix D.1. We will move some text from the Appendix to the main body to further highlight this.
> Paraphrasing attacks might not be considered effective if they degrade the original text in certain ways (tone, style).. As things stand, it is not immediately clear in the main body of the paper whether DIPPER strongly preserves the quality of the original text.. This should probably be clarified by summarizing the results of the additional quality evaluations from Appendix C
As the reviewer pointed out, we have included several additional automatic and human paraphrasing evaluations in Appendix C to showcase the strengths of DIPPER. We will summarize these results in the main body of the paper. We agree with the reviewer that paraphrases are likely to modify style, as shown in [1]. However, as mentioned in the previous paragraph, an important property of DIPPER is its fine-grained diversity control. Our lexical and order diversity control knobs allow an attacker to modify a generation *just enough* to evade detection: the lower the value of the knobs, the smaller the stylistic modification.
> If the underlying language model is itself capable of diverse generations, is it possible that we might see a "saturation" problem in certain topic areas (the Gettysburg Address)?
This is a great point about semantic overlap within the database on popular topics. We address this in detail in the “Global Rebuttal” under “semantic collisions”. In summary, we believe that the chance for semantic collisions is low because:
* retrieval-based detection uses pairwise comparisons, and the likelihood of semantic divergences between pairs exponentially increases with length;
* the most effective retrievers do not solely rely on semantics;
* unperturbed text will always have a 100% detection rate.
> I'm not sure whether the paper effectively argues for the feasibility of retrieval-based defenses at scale.
We extensively discuss this in the “Global Rebuttal” to all reviewers, on axes of storage, compute and accuracy. Overall, we are quite optimistic about the feasibility of retrieval in terms of storage requirements (5TB for a database with 1.5-2B generations) and compute requirements (just 100 seconds for retrieval against a 1.5B database on a Macbook Pro). Importantly, the major players providing LLM API services (Google, OpenAI, Microsoft) have vastly superior computational infrastructure already in place to power services like Google Search, ChatGPT and Bing at scale.
In terms of accuracy, we are optimistic looking at our scaling curves (Figure 5a shows just a 0.8% drop from 1M to 10M in BM25). Our experiments already use the largest publicly available AI-generated dataset (to the best of our knowledge), and collecting a billion-scale dataset with ChatGPT would cost about $1M.
> it would be helpful if the authors could point to one or more analogous text-retrieval systems capable of operating at the aforementioned hypothetical scale
Traditional information retrieval has a slightly different setup compared to our retrieval-based detection. Queries tend to be more information seeking (rather than looking for exact matches or paraphrases), and recall@k is also important besides precision. Nevertheless, we think Google Search is an excellent example of text-retrieval operating at scale, since Google’s search index is over 100B webpages [3]. In academic literature, we found a few examples of experimental setups operating at a billion-scale [4, 5]. [4] shows a precision@1 of 60% for time-aware retrieval.
[1] https://arxiv.org/abs/2010.05700
[2] https://eval.ai/web/challenges/challenge-page/1897/leaderboard/4475
[3] https://www.google.com/search/howsearchworks/how-search-works/organizing-information
[4] https://sobre.arquivo.pt/wp-content/uploads/creating-a-billion-scale-searchable-web-archive.pdf
[5] https://arxiv.org/abs/2110.06125
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the authors for their detailed reply to my review and for their general rebuttal. I think the authors have adequately addressed my concerns regarding the technical challenges of a retrieval-based defense, and I agree that the scale of the retrieval experiments performed is reasonable given the presumptive cost of creating a novel LLM text database. I think the paper will be significantly stronger with the authors' proposed modifications to address reviewer critiques (many of which were shared by multiple reviewers, and thus would likely be raised by readers). I have adjusted my score accordingly. | Summary: The submission "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense" investigates detection methods for text output generated by modern large language models. The contribution of this submission consist of two parts. First, the submission describes, trains and provides a state-of-the-art language paraphrasing model. This paraphrasing model is then used to critically evaluate a number of detection schemes for their robustness against paraphrasing, finding that many modern detection approaches are not robust to paraphrasing attacks that leave the semantics of the original text unchanged, but modify its wording.
In a second part, the authors use this finding as motivation to describe a detection strategy based on semantic retrieval.
Strengths: This is a great submission. It starts out with a clear hypothesis about the detection of LLM-generated text and executes its two mechanical parts, first generating the paraphraser and then constructing the retriever, very well.
In more detail, the construction of a paraphrasing dataset through re-alignment of paragraph-level parallel translations is novel to me and quite interesting. The authors make the reasonable choice of finetuning existing encoder-decoder models for paraphrasing based on this corpus, a choice that they experimentally validate to work well.
Then, they rightfully point out that the detection of generated text can be cast as a retrieval problem, which they show indeed simplifies the problem and allows existing similarity search tools to be leveraged. Just in case, I explicitly want to point out that I don't consider the use of existing similarity search tools a weakness, but instead a correct conclusion based on the authors' insight that the problem can be understood in this manner. In a detailed evaluation, the authors compare LLM detection based on retrieval with other approaches, finding it to be a robust choice.
Finally, I especially want to highlight the immediate practical value that the paraphrasing model would have as a tool for the community. Previous papers have often only hinted at the possibility of paraphrasing, or used general-purpose APIs to attempt to provide accurate paraphrases, and this has limited evaluations of the very practical threat model of paraphrasing in the literature. With the release of this paper and the paraphrasing model provided here, the authors would provide a practical tool to the community that suddenly makes investigation of this threat model much more feasible.
Weaknesses: I only see a few minor weaknesses, which I will point out below. These mainly orbit around questions of "why", which this submission does not always contain.
* Why would these detectors break under paraphrasing? While we do observe that the detectors based on outlier detection and classifier methods break after paraphrasing, aren't some of these detectors kind of correct in their assessment that the text is not written by the original model anymore? This does not apply to all detectors, but those that claim to only detect a specific model (or model family) are not entirely incorrect? To phrase this question differently, would paraphrasing also break detectors that classify "generic" machine text, and are not specialized to detect particular models or model families?
* Why is the FPR low for detection? There currently seems to be no way to estimate the reliability of the retrieval-based detector formally and to guarantee a certain FP rate, to my understanding? Is this a principal limitation, or could the method be modified to use a threshold not calibrated from existing data? Ideally, in a way that includes the size of the corpus, allowing for estimates of the performance of this approach at larger scales than can be tested empirically in this work?
* Is there a corpus size at which retrieval stops being meaningful, if it is based only on semantics? I could imagine, for the sake of the argument, that many school essays that summarize existing arguments about some fact or historical event on per-paragraph level, might be semantically the same. These would also be semantically similar to some output when the model was at one point queried with this topic separately?
In any case, the submission stands strong based on the empirical evidence it provides, and some of these questions might be left to future work.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: A few smaller questions:
* What about random substring detection? To my understanding, the submission only tests queries where the back-end of the query is removed. Would detection of a random substring from the query perform equally well? Bonus question: What about retrieval for several substrings from different corpus entries, i.e. what if a document is constructed based on text generated by mixing and matching multiple queries to the language model API?
* How large is the Par3 dataset? Could the authors briefly comment on dataset size and finetuning compute required for the paraphraser in the main body?
* I found the different FPR for DetectGPT in a part of Table 1 somewhat confusing. I see that the authors want to be polite here, but to me it would be clearer to indicate that this detection method really scores 0% at the given FPR. A 20% FPR table could be included in the appendix, although I agree with the authors that there is no practical value to detection schemes with this FPR.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are a number of ethical implications to storing all user-generated text on the server for retrieval, concerning data privacy. It would be great if the authors could briefly comment on this also in the main body, as this question is currently only discussed in Appendix B.1.4.
One question I have, related to this discussion of ethics, (this is a bit of an aside, the paper is great without answering this), is how retrieval would interact with regulations that include the "right-to-be-forgotten", like GDPR. Could a user ask the company to delete their generations, to prevent detection? Or should it be argued that detection is "sufficient cause" to stop deletion of user data?
I also think there are some possible solutions to this question, where reconstruction of x_i from y_i can be ruled out, possibly via bloom filters or (minimal) differential privacy?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful feedback, and for strongly supporting our paper! In particular, we are grateful to the reviewer for supporting 1) our novel paraphraser and its open-sourcing; 2) our experimental analysis testing the robustness of AI-generated text detectors; 3) our casting of AI-generated text detection as a retrieval problem and its downstream robustness.
The reviewer further asks a number of interesting questions related to paraphrasing as an attack and retrieval as a defense. We address them below.
> would paraphrasing also break detectors that classify "generic" machine text, and are not specialized to detect particular models
Our experiments do show that paraphrasing drops performance in all model-specific detectors we tried (watermarking, DetectGPT) and two out of three model-agnostic detectors (OpenAI, GPTZero). However, it’s hard to fairly compare the relative effect of paraphrasing on these two classes due to the large performance gap between them in the first place. In Table 1, we see that even without paraphrasing, model-specific detectors significantly outperform model-agnostic detectors. OpenAI has even gone so far as to remove their model-agnostic classifier from their website due to low accuracy [1]. Overall, we observe that model-specific watermarking is the most robust to paraphrasing attacks despite the large drop. It’s also important to note that paraphrased text is more “AI-generated” in some sense. One of our model-agnostic detectors (RankGen) indeed thinks so (7% vs 1% TPR for GPT3.5), but its low overall TPR makes it unusable as a detector.
It is technically true that model-specific detectors are not designed to detect outputs not entirely generated by the model itself. However, given their strong performance over model-agnostic methods, and the large risk of perturbation attacks (human-edited or automatically-edited), we think it’s important for model-specific detectors to bake perturbation robustness in their design for better downstream usability.
> There seems currently no way to estimate the reliability of the retrieval-based detector formally and to guarantee a certain FP rate to my understanding?
Similar to DetectGPT / classifiers, we believe the threshold needs to be estimated empirically on the underlying data distribution. We will add some analysis on this in the next version, and analyze the out-of-distribution robustness of thresholds chosen on a subset of human data. A formal relationship between FPR and the threshold (like in watermarking) may be possible using information about the density of the retrieval database in the semantic vector space. We leave this exploration for future work.
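A minimal sketch of the empirical threshold calibration described above; the score distributions here are entirely synthetic stand-ins for real retrieval similarity scores:

```python
import random

def calibrate_threshold(human_scores, target_fpr=0.01):
    """Empirically pick a detection threshold so that at most
    `target_fpr` of human-written texts score above it."""
    return sorted(human_scores)[int(len(human_scores) * (1 - target_fpr))]

random.seed(0)
# Made-up retrieval-similarity scores: human text scores low against
# the database of stored generations, AI-generated text scores high.
human = [random.gauss(0.3, 0.1) for _ in range(10_000)]
ai = [random.gauss(0.8, 0.1) for _ in range(10_000)]

t = calibrate_threshold(human, target_fpr=0.01)
fpr = sum(s > t for s in human) / len(human)  # ~1% by construction
tpr = sum(s > t for s in ai) / len(ai)
```

As noted above, how well a threshold chosen this way transfers to out-of-distribution human text is exactly the open question.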
> Is there a corpus size at which retrieval stops being meaningful, if it is based only on semantics?…school essays that summarize existing arguments about some fact or historical event…might be semantically the same.
This is a great point about semantic overlap within the database on popular topics. We address this in the “Global Rebuttal” under “semantic collisions”. In summary, we believe that the chance of semantic collisions is low because:
* retrieval-based detection uses pairwise comparisons, and the likelihood of semantic divergences between pairs exponentially increases with length
* the most effective retrievers do not solely rely on semantics
* unperturbed text will always have a 100% detection rate.
> What about random substring detection? What about retrieval for several substrings from different corpus entries
Our experiments in Fig 5b used truncated paraphrases as queries. Below, we present results for the other query types that the reviewer suggested. We adopt the same setup as Fig 5b: PG19-BM25, 1% FPR.
Results in the paper:
* full-length unperturbed query: 100%
* full-length DIPPER query: 98.2%
* 50% truncated DIPPER query: 72.6%
New results:
* 50% random unperturbed substring: 86.2%
* 50% of two different unperturbed queries concatenated: 94.7%
* 50% random DIPPER substring: 68.8%
* 50% of two different DIPPER queries concatenated: 56.1%
Overall, we see that unperturbed random substrings (from single or multiple generations) can still be detected quite easily. However, adding DIPPER paraphrasing on top of that reduces accuracy.
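To make the substring-detection setup above concrete, a toy pure-Python sketch; the corpus, scoring function, and threshold are made up, and the simple unigram-overlap score stands in for BM25 over millions of stored generations:

```python
from collections import Counter

def overlap_score(query, doc):
    """Fraction of query unigrams found in a candidate document --
    a much-simplified stand-in for BM25 scoring."""
    q, d = Counter(query.split()), Counter(doc.split())
    hits = sum(min(c, d[w]) for w, c in q.items())
    return hits / max(1, sum(q.values()))

def detect(query, corpus, threshold=0.6):
    """Flag the query as AI-generated if its best match among the
    stored generations scores above the threshold."""
    return max(overlap_score(query, doc) for doc in corpus) >= threshold

# One stored generation; a full query and a 50% substring of it.
corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
full = corpus[0]
half = "jumps over the lazy dog near"

detect(full, corpus)   # True: unperturbed text always matches
detect(half, corpus)   # True: substrings of unperturbed text still match
```

Paraphrasing lowers the token overlap between query and database entry, which is why the DIPPER rows above score below their unperturbed counterparts.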
> How large is the Par3 dataset?… fine tuning compute?
Our processed Par3 dataset has 6.3M pairs (which we will open-source). We observed convergence on held-out novels within 12 hours on 64 cloud TPUv3 chips.
> … how retrieval would interact with regulations that include the "right-to-be-forgotten", like GDPR. Could a user ask the company to delete their generations, to prevent detection? Or should it be argued that detection is sufficient cause to stop deletion?
We agree with the reviewer that AI-generated text detection is a sufficient cause to temporarily override deletion requests [2]. We believe that AI-generated text detection could fall under the following GDPR guidelines for overrides: (1) “freedom of expression and information”, (2) “establishment of a legal defense or in the exercise of other legal claims”, and possibly (3) “comply with a legal ruling or obligation” in the future. As an example, while OpenAI allows users to delete their chat history [3], they retain it for 30 days and can review it if required to monitor for abuse. We also agree with the reviewer that differential privacy or scrubbing sensitive attributes can mitigate the privacy issue to some extent. We will mention this in the next version, and also move our discussion on data privacy from Appendix B.1.4 to the main body as requested by the reviewer.
> I found the different FPR for DetectGPT in Table 1 confusing
We agree with this concern about our presentation. We will clarify this in the next version.
[1] https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text
[2] https://gdpr.eu/right-to-be-forgotten
[3] https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for the additional clarification and experiments, I have no further questions! | Rebuttal 1:
Rebuttal: We are very grateful to the reviewers for their detailed feedback. While we address each reviewer’s questions in the individual rebuttals, we use this “global rebuttal” to address concerns shared by multiple reviewers.
We thank the reviewers for supporting the three contributions in our paper:
* a novel discourse paraphrasing model and its open-sourcing
* a comprehensive set of effective paraphrasing attacks on modern AI-generated text detectors
* a novel defense using information retrieval and extensive discussion of its limitations.
Many of the concerns center around our third contribution of retrieval-based detection. These concerns can be categorized as:
* issues of scalability (storage, compute, accuracy)
* limitation to closed-source LLMs
* semantic collisions
Below, we address each in detail.
**Scalability of retrieval - storage**: We estimate ChatGPT’s outputs to take 5TB space monthly (similar to a personal portable hard-disk) via the following calculations. ChatGPT currently gets about 2B monthly visits [3]. Assuming an average response length of 500 tokens per session, this corresponds to 1 trillion tokens. Similar in size to LLaMA’s training data, this needs 5TB space [2]. However, 5TB is a small amount of storage compared to the industrial scale of information retrieval. For example, the Google Search index is over 100,000TB and has 100B+ pages [1]. Major LLM service providers (like Google, OpenAI) already have complex storage infrastructure to facilitate this defense. Additionally, it’s likely that they already store their model outputs for future RLHF purposes.
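The arithmetic above, spelled out; the ~5 bytes-per-stored-token figure is our assumption, chosen to match the LLaMA-scale "1 trillion tokens needs 5TB" comparison:

```python
# Back-of-the-envelope storage estimate for one month of ChatGPT outputs.
visits_per_month = 2_000_000_000   # ~2B monthly visits
tokens_per_session = 500           # assumed average response length
bytes_per_token = 5                # assumption: text plus light metadata

tokens = visits_per_month * tokens_per_session  # 1 trillion tokens
terabytes = tokens * bytes_per_token / 1e12     # ~5 TB per month
```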
**Scalability of retrieval - compute**: Our retrieval experiments, conducted on a 14-core CPU (similar to a Macbook Pro), took 1 second per retrieval on a 15M sized corpus. Extrapolating to a corpus of ChatGPT’s monthly usage (100x) would need 100 seconds/retrieval on a Macbook. However, this is fully parallelizable, and can make use of GPUs (Google searches 100B+ entries in < 1 sec). Moreover, efficient similarity search has powerful libraries like FAISS available. For comparison, ChatGPT itself takes 10 seconds/response, possibly using a powerful 8-GPU A100 server [4]. Major LLM providers have massive compute clusters, and we believe the computational requirement of retrieval is much lower than hosting LLMs in the first place, which these providers are already adept at. Moreover, our proposed ideas in B.2 can further reduce compute costs.
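To illustrate why top-1 similarity search is so parallelizable, a toy brute-force dense-retrieval sketch; the embedding dimensions and database size are made up, and in practice libraries like FAISS provide optimized billion-scale versions of this search:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up database of 100K unit-normalized 64-d generation embeddings.
db = rng.normal(size=(100_000, 64)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)

def top1(query_vec):
    """Return (index, cosine similarity) of the closest stored generation.
    One matrix-vector product; shards of `db` can be searched in parallel
    and the per-shard maxima merged."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = db @ q
    i = int(np.argmax(sims))
    return i, float(sims[i])

# A slightly perturbed copy of entry 123 should still retrieve entry 123.
i, s = top1(db[123] + 0.01 * rng.normal(size=64))
```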
**Scalability of retrieval - accuracy on larger databases**: Our experiments were conducted on the RankGen training set [7], which is the largest publicly available database of AI-generated text that we are aware of (15M generations each in four domains). Besides this, there are a few other datasets of AI-generated text such as GPT4All (809K) and ShareGPT (350K generations). While these datasets are much smaller than ours, we will experiment with them in the next version to add more diversity to our results. We note that it is extremely expensive and time-consuming to create a corpus of AI-generated text from scratch: at a cost of $0.001 per 500-word response, collecting a billion ChatGPT outputs would cost $1M and take a long time due to rate limits. Hence, a billion-scale experiment is likely only possible with Google/OpenAI’s private database.
Overall, we are optimistic about our scaling plots (Fig 5a), which show just a 0.8% drop moving from a 1M to a 10M database (PG19-BM25). We emphasize that BM25 is a basic retriever and is not optimized for our task. The information retrieval literature has many powerful retrievers, and we have suggested a dense retrieval mechanism in B.2 which can be optimized on the underlying retrieval corpus. Moreover, retrieval can easily be used in tandem with other detectors like watermarking. Finally, we note that our paper is the first proof-of-concept showing that a retrieval-based detector could work, and we anticipate future work building upon it.
**Retrieval is limited to closed-source LLMs**: We agree with this and will add it to our limitations. However, besides Meta, all major LLM providers (OpenAI, Google, Anthropic, Microsoft, Cohere) operate their LLMs behind closed APIs. It’s also important to note that watermarking, the most promising alternative to retrieval, also has this limitation. Since watermarks are added during decoding rather than into the model weights, users of open LLMs are free to generate text without watermarks. While other alternatives (DetectGPT, classifiers) don’t suffer from this issue, our paper shows that they either have low accuracy, or are extremely vulnerable to paraphrasing. In fact, OpenAI recently took down their classifier due to low accuracy [5].
**Semantic collisions**: Reviewers raised a concern about our retrieval database saturating with entries having similar semantics (especially for popular topics), which will harm detection at scale. In response, we note (and will update our paper to include) that:
* Like other detectors, retrieval works best on longer sequences (Fig 5b). Long generations exponentially increase the likelihood of semantic divergences between pairs of entries.
* Retrieval compares the input against the top-1 match, not top-k. For false-positive inputs on popular topics, the top-k entries *together* are more likely to cover input semantics (recall) rather than top-1 (precision).
* The most effective retrievers use a combination of neural semantic encoders and token overlap scores [6]. We also show this: BM25 beats P-SP at detection. BM25 is not fully semantics-driven: it uses TF-IDF token overlap.
* The retrieval accuracy for unperturbed AI-generated text is always 100%, just like exact match searches in Google Search. Retrieval is also effective on substrings of unperturbed text (see rebuttal to KFgu).
[1] tinyurl.com/ggsdb
[2] tinyurl.com/fbllama
[3] tinyurl.com/chatnyp
[4] tinyurl.com/chatgpu
[5] tinyurl.com/oaicls
[6] tinyurl.com/beirev
[7] tinyurl.com/rrkgn | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper first demonstrates the vulnerability of existing
AI-generated text detectors to paraphrases, and then proposes a
retrieval-based method to alleviate the issue. For the first
experiment, the authors trained a paragraph-level paraphrase
generation system, called DIPPER, by fine-tuning an existing
text-to-text model, i.e., T5-XXL. To this end, the authors compiled a
dataset comprising pairs of paragraphs with two indication parameters
to control lexical diversity and content reordering. The proposed
retrieval-based detector assumes the LLM API providers to save all
their generated contents and to offer an interface for access the
database. The authors experimented with a corpus of 15 million such
examples, and shows that this approach is more robust to paraphrased
texts compared to existing detectors.
Strengths: - A paragraph-level paraphraser with the recipe to build it.
- Demonstration of the vulnerability of existing AI-generated text detectors to paraphrases.
- A simple but effective retrieval-based method for robust AI-generated text detection.
Weaknesses: - The underlying assumption of the proposed method is not likely to be realistic. We can appeal to LLM API providers' ethics, but there is no means to force them to conform to this approach. Besides the providers, there is nowadays also an increasing number of publicly available strong LLMs, and we cannot regulate personal uses of LLMs and AI-generated texts. I acknowledge the discussion in Appendix B.1, but nothing is discussed in the main paper. [Through the rebuttal, the authors explained some obligations of LLM providers and promised to include a summary in the main paper.]
- Scalability of the proposed approach is not sufficiently evaluated.
ll.297-298 states "a popular LLM API may serve millions of queries a
day" and it is likely to be true, given ChatGPT acquired more than
100 million users in the first two months and more than 13 million
unique users a day. Compared to this, the experiment in this paper
uses only up to 15 million AI-generated texts, so it is unclear
whether the proposed method is feasible (in memory and time, in
addition to accuracy) with billions or trillions of such texts. I
acknowledge the discussion in Appendix B.2, but nothing is discussed
in the main paper.
[The response reports 100 seconds per retrieval on the client side,
but this is misleading since the data store and retrieval operations
must be located on the server (LLM provider) side. However, I
understand that the speed would not matter given the strong
facilities at LLM providers and the high parallelizability of the
retrieval task. The experiment with 15 million instances does not
sound realistic, but I understand that we should accept this if the
authors are not at a giant tech company.]
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In l.127, $L$ is defined as "unigram token overlap" but which of $p$
and $q$ is used to determine the denominator? [harmonic mean]
- Input sequences in l.132 and l.180 contain ordinary tokens:
"lexical", "order", "=", and ",". Did the authors use them without
escaping as "<p>" and "</p>" ? If so, wouldn't it distort the
embeddings for these tokens? [not escaped, following convention.]
- How many training instances were used for obtaining DIPPER?
[6.3 million pairs, which should be included in the main paper.]
- I'd like to suggest tidying the layout. For instance, Tables 1, 2
and Figure 5 appear in the previous page of their first mention in
the main text; Figure 3 shows a table; the mention of Figure 6
appears before Figures 4 and 5; Figure 6 is embedded in an
irrelevant paragraph. [the authors promised to do so.]
- l.264: $L$ has already been used in l.127.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes. The authors describe the limitations in Appendix B and ethical considerations in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful feedback, and for supporting our contributions on paragraph-level paraphrasing, attacking AI-generated text detectors, and introducing a retrieval-based defense mechanism.
The reviewer voiced concerns about the practicality of the proposed retrieval-based detection algorithm, some of which the reviewer noted were discussed in our Appendix (and which we will summarize in the main body of the next version). While we address these concerns below, we want to emphasize that retrieval-based detection is only one of our three contributions. Besides retrieval, our contributions include: 1) a novel paragraph-level paraphraser which we will open-source (data + models) for the community to use; 2) a thorough experimental study attacking five AI-generated text detectors on outputs from three diverse large language models on two real-world tasks. We will fully open-source the code and datasets for reproducibility.
> We can call LLM API providers for their ethics, but there is no means to enforce them to conform to this approach (retrieval-based detection)
We believe that LLM API providers may actually be enforced or even incentivized to store their model-generated outputs:
* There is a substantial push from both the US and European governments to regulate companies to make their AI-generated text/images detectable. For instance, two weeks ago, Google, OpenAI and Meta made voluntary commitments to watermark their AI-generated content [1]. Similarly, the European Union recently pushed for rulings to make AI-generated content detectable [2].
* Companies are already storing their model-generated outputs. For instance, both ChatGPT and Bard are storing chat histories to help improve their products with RLHF training. Bard stores chat history for 18 months by default [6], and OpenAI stores the history for 1 month even if users choose to opt-out [5].
* In a hypothetical scenario where there is a lawsuit about the origin of some malicious AI-generated content, maintaining a database of previously generated responses could be a reliable method to prove innocence, given its strong performance over competing AI-generated text detectors.
> it is unclear whether the proposed method is feasible (memory and time in addition to accuracy) with billions or trillions of such text
In the “Global Rebuttal”, we provide a detailed analysis estimating these requirements for a ChatGPT-scale database. In summary, we estimate them to be relatively small: a month’s worth of ChatGPT outputs needs about 5TB of storage space and requires just 100 seconds per retrieval on a CPU-only Macbook Pro. This is trivial compared to the scale at which Google Search (100,000+ TB index) and ChatGPT (10-15 seconds on a powerful 8x A100 GPU server) are currently operating. Moreover, major LLM API providers (like OpenAI, Google) already have the infrastructure in place to support services like Google Search and ChatGPT at scale.
In terms of accuracy, we are optimistic looking at our scaling curves (Figure 5a shows just a 0.8% drop from 1M to 10M in BM25). Our experiments already use the largest publicly available AI-generated dataset (to the best of our knowledge). Collecting a billion-scale dataset with ChatGPT would cost about $1M, making the experiment possible only with OpenAI’s or Google’s private databases.
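To make the retrieval pipeline concrete, the detection step can be sketched as a BM25 datastore lookup: a candidate text is flagged as AI-generated if its best-scoring match among stored model outputs exceeds a threshold. This is our own minimal illustration (the class name, toy corpus, and threshold are hypothetical), not the paper's implementation:

```python
import math
from collections import Counter

class BM25Index:
    """Toy datastore of previously generated LLM outputs.
    A paraphrased candidate is flagged as AI-generated if its best
    Okapi BM25 match in the stored corpus exceeds a threshold."""
    def __init__(self, docs, k1=1.5, b=0.75):
        self.docs = [d.lower().split() for d in docs]  # tokenized corpus
        self.k1, self.b = k1, b                        # standard Okapi defaults
        self.avglen = sum(len(d) for d in self.docs) / len(self.docs)
        self.df = Counter(t for d in self.docs for t in set(d))  # document freq
        self.n = len(self.docs)

    def score(self, query, doc):
        tf = Counter(doc)
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log((self.n - self.df[t] + 0.5) / (self.df[t] + 0.5) + 1)
            num = tf[t] * (self.k1 + 1)
            den = tf[t] + self.k1 * (1 - self.b + self.b * len(doc) / self.avglen)
            s += idf * num / den
        return s

    def detect(self, text, threshold):
        # flag if the best match in the datastore is close enough
        return max(self.score(text, d) for d in self.docs) >= threshold

store = BM25Index([
    "large language models generate fluent text",
    "the cat sat on the mat",
])
# a paraphrase still shares many content words with the stored output
print(store.detect("language models can generate very fluent text", threshold=2.0))  # True
```

A real deployment would of course replace the linear scan with an inverted index, which is what makes the reported retrieval latencies plausible at corpus scale.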
> nowadays there are also an increasing number of publicly available strong LLMs, and we cannot regulate personal uses of LLMs and AI-generated texts
This is a valid concern, and we address this in the “Global Rebuttal”. Overall, we agree with the reviewer that our detector is restricted to closed-source LLMs. However, most major LLM providers are hosting their LLMs behind closed-source APIs. Additionally, watermarking, the most promising alternative to retrieval, also suffers from the same limitation and cannot be done on open LLMs. Other detectors either perform very poorly or are brittle against paraphrasing attacks.
> I acknowledge the discussion in Appendix B.1, but nothing is discussed in the main paper…. I acknowledge the discussion in Appendix B.2, but nothing is discussed in the main paper.
We will add a summary of limitations of retrieval-based detection to the main body of the paper.
> In L127, $L$ is defined as "unigram token overlap", but which of $p$ and $q$ is used to determine the denominator?
In L127 we use an F1 score to determine the unigram token overlap between p and q, so both p and q contribute to the denominator before the harmonic mean is taken.
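Concretely, such an F1-style overlap can be sketched as follows (a minimal illustration assuming whitespace tokenization; the helper name `unigram_f1` is ours, not from the paper):

```python
from collections import Counter

def unigram_f1(p: str, q: str) -> float:
    """F1-style unigram overlap: both p and q enter the denominator
    via the harmonic mean of precision and recall."""
    cp, cq = Counter(p.split()), Counter(q.split())
    overlap = sum((cp & cq).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cq.values())
    recall = overlap / sum(cp.values())
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the cat sat", "the cat ran"))  # 2/3, symmetric in p and q
```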
> L132, L180 contain ordinary tokens: "lexical", "order", "=", and ",". Did the authors use them without escaping as "<p>" and "</p>"?
We did not escape “lexical” / “order” in order to leverage the pre-trained embeddings for these words. We don’t think this should have any major effect on training, because we are using pre-trained 11B-parameter transformer models, which are extremely compositional in nature. While the embeddings may get slightly distorted, we expect higher layers to compose information from consecutive tokens to infer the differences between control tokens and content tokens. We also note that unescaped control tokens are standard practice in T5 fine-tuning (Fig 1 in [1]).
> How many training instances were used for obtaining DIPPER?
We used a dataset of 6.3M pairs (which we will open source). We noticed convergence on held-out novels within 12 hours on 64 cloud TPUv3 chips.
> I'd like to suggest to tidy the layout
We agree with these presentation issues, and will clean them up in the next version.
[1] https://tinyurl.com/usgovt-ai
[2] https://tinyurl.com/eugovt-ai
[3] https://arxiv.org/abs/1910.10683
[4] https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt
[5] https://support.google.com/bard/answer/13594961
[6] https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text
---
Rebuttal Comment 1.1:
Title: Read the responses
Comment: Thank you for your response. I'll update my review accordingly. | null | null | null | null | null | null |
DeWave: Discrete Encoding of EEG Waves for EEG to Text Translation | Accept (spotlight) | Summary: This paper proposes an EEG-to-text model that takes raw EEG signals as input and predicts the corresponding words or long sentences. The model is optimized in two stages: (i) matching EEG to text by contrastive learning with codex quantization; (ii) fine-tuning the BART model to decode EEG embeddings into text. The model is evaluated on the ZuCo 1.0 and 2.0 datasets.
Strengths: 1. The EEG-to-text application sounds interesting and challenging. The proposed model architecture sounds reasonable, leveraging the recently popular CLIP-style contrastive learning, signal quantization (from the speech processing domain), and BART fine-tuning (LLM).
2. The paper is easy to read, and the related works are well organized.
3. The performance on the ZuCo datasets looks good.
Weaknesses: 1. The reviewer has concerns about the data volume for training the CLIP-style model (the ZuCo dataset might not be large enough). It would be good if the authors could comment on this.
2. It would be good to add more experiments on whether the discrete codex module is useful. What if we directly use the raw continuous signal? How can the discrete codex be learned better (the codex grouping in [1] could potentially improve the model)?
3. BART is a general-purpose pre-trained language model, which may not be suitable for this application. In the medical domain, [2] might be more suitable given that the fine-tuning data might be limited.
4. More datasets and baseline models should be added to systematically and rigorously demonstrate the superiority of the proposed model.
[1] Baevski et al. Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations.
[2] Lee et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback and recognize the significance of the highlighted concerns. Below, we address the identified weaknesses point by point:
1. **Data Volume Concerns for the CLIP Model**: The limited data scale has been a key challenge in the BCI area; the available training data is small compared to what vision-language alignment methods such as CLIP use, which is also why EEG-to-text alignment has remained unexplored compared to vision-to-text alignment. However, this work shows that introducing contrastive alignment at a limited data scale brings a clear improvement and that cross-modality alignment in the BCI area is feasible, which can be a good reference/encouragement for future work. Although the text corpus is limited, each sentence has EEG waves from 18/12 human subjects reading it, which still yields a large amount of EEG data. We utilize pre-trained language models as the decoder and pre-trained word2vec embeddings as alignment guidance for the EEG encoder, which alleviates the limited-training-data problem.
2. **Utility of the Discrete Codex Module**: This is a good point. The comparison against directly using the raw continuous signal is already reported in the raw-waves part of Tab. 1 in the main paper. Since EEG-to-Text [1] is designed only for word-level EEG features, we re-implemented this method and trained it directly on raw waves to realize this comparison. For a fair comparison, both methods use the same pre-trained BART decoder. We list the performance reported in Table 1 of the main paper (between lines 213 and 243) here for your reference.
| Method | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-R | ROUGE-P | ROUGE-F |
| ----------------------------------- | ------ | ------ | ------ | ------ | ------- | ------- | ------- |
| EEG-to-Text (directly on raw waves) | 13.07 | 5.78 | 2.55 | 1.10 | 15.22 | 18.08 | 16.36 |
| DeWave | 20.51 | 10.18 | 5.16 | 2.52 | 21.18 | 29.42 | 24.27 |
The performance suggests a clear improvement in using discrete codex over directly using continuous raw EEG waves.
Regarding the open question of grouping the discrete codex, we think that, given the currently limited training data, further splitting the codex into sub-groups might not have a positive effect, but it could be an interesting question in the future if there is sufficient data. It is also noted that, according to Fig. 5 of the main paper, simply increasing the codex size leads to a decrease in performance. For these reasons, we did not report these aspects in the main paper.
3. **Choice of BART over Domain-specific Models**: The choice of BART has three reasons: 1) as you mentioned, it is a general-purpose language model with a normal language distribution. Since ZuCo uses a text corpus drawn mostly from Wikipedia pages in the general domain, not the medical domain, the pre-trained distribution of BART is closer to the ZuCo dataset. 2) BART's pretraining adds noise tokens to the encoder input when training the language decoder, which makes it more suitable for noisy encoded EEG embeddings. 3) We keep the decoder model the same as in previous work [1], so we can better isolate the impact of the proposed discrete codex with contrastive alignment.
4. **Possibility of Additional Datasets and Baselines**: We would very much like to have more data on brain-signal-to-text translation. However, as of the submission deadline, the ZuCo dataset is the only choice. We will actively track new datasets. We are also currently recording our own data and will provide more details in future work.
For baselines, the EEG-to-Text [1] baseline in our main paper is the only valid baseline currently. We have tried our best to adapt possible baselines from other domains for this task: we adapted Wav2Vec [2] from the speech recognition area, SCL from the brain sleep-stage recognition area, and BENDR [3] from the EEG self-supervised training area to the translation task as our baselines. In this sense, we think the current experiments are as complete as existing works allow.
We thank the reviewer for the very good advice and will actively track baselines for brain decoding for the community in our future work.
Again, we thank the reviewer for their constructive feedback and hope our rebuttal addresses your concern.
[1] Zhenghailong Wang, et al, **Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification** AAAI 2022.
[2] Baevski, Alexei, et al. **Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations.** NeurIPS 2020: 12449-12460.
[3] Kostas, Demetres, et al. **BENDR: Using Transformers and a Contrastive Self-Supervised Learning Task to Learn From Massive Amounts of EEG Data.** Frontiers in Human Neuroscience 15 (2021).
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal.
Comment: We thank the authors for the detailed rebuttal and additional experiments. I still think the small dataset might not be sufficient to give solid insights and conclusions on the BCI problem. I will keep my score.
Strengths: * This paper is the first I have seen that attempts to decode continuous language, i.e. without explicit word boundary markers, from EEG data. This is a big and important step! Decoding from word-aligned EEG snippets is clearly much easier but also much less interesting than this variant. I think pushing for this type of decoding is a great contribution of the current work.
* The use of a vector quantization approach to represent EEG data also seems like an important step. (However, I have not read every paper in this area, so I can’t say with 100% confidence that this is the first work to apply this approach to the current problem.)
* The method seems, overall, well-designed.
Weaknesses: * The paper is at several points confusing and difficult to follow or understand. For example, in section 3.2 I found it impossible to understand the sentence “The codex contains fewer time-wise properties which could alleviate the order mismatch between event markers (eye fixations) and language outputs.” (I’ll expand on this with specific queries under “Questions”, below.)
* Minor scientific issue: Eye movements were used to segment the EEG data into words in the segmented (traditional) condition, but were ignored for the continuous condition. However, eye movements are well known to cause HUGE transient deflections in EEG signals, to the point where much/most EEG data includes preprocessing that attempts to minimize their influence. I would imagine such preprocessing was done for this dataset (please correct me if I’m wrong), but even ideal eye movement removal cannot remove 100% of the related signals. So I think it could be important for the authors to consider (and perhaps even test) whether their continuous decoder is actually discovering latent eye movement signals.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * In section 3.2 under “Inference”, it would be very useful to state how temporal information is encoded, e.g. with a separate x for each word in the segmented condition, and for each 200 ms time period in the continuous condition.
* In section 3.3 under “Word-Level EEG Features…”, the preprocessing is not well explained. This section might rely on the reader knowing earlier work, but I found it difficult to understand what the 840-dimensional vector actually comprises.
* I found it very surprising and slightly worrying that the decoder seemed to be able to retrieve both exact proper nouns (“Kerouac” in Table 3, example 3) and approximate proper nouns (“Heroughs” for “Burroughs” in same). How is this possible? The contrastive semantic alignment loss, as I understand, cannot account for this, since it would operate on discrete tokens, unless those tokens also occurred in the training set. Is the discrete codex really capturing enough information about the constituent BPE tokens for those words to reconstruct them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I think the most important limitation to note is that this method applies to EEG data collected while subjects actively read text, and cannot (currently) be applied to data collected while subjects merely think words. The authors do note this in their “Limitations” section, which is great.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your thorough review and constructive comments on our manuscript. We genuinely appreciate your insights and will address your concerns point by point below.
1. **In section 3.2 under “Inference”, it would be very useful to state how temporal information is encoded.**
- For the word-level EEG features, since eye-tracking markers indicate the EEG wave fragment related to each word, we directly obtain the embedding sequence by slicing the signal into {$x_p$}, $p$={$1,2,..,M$}, where the $p$th embedding corresponds to the $p$th word.
- For the raw-wave setting, as shown in Fig. 3, we use a wide convolutional kernel that slides with a certain stride to sequentially slice the raw waves into a continuous embedding sequence {$x_p$}, $p$={$1,2,..,N$}, where the $p$th embedding corresponds to a continuous perception field (e.g., 200 ms).
Then we directly use the position embedding from the original Transformer paper [1] for both settings. Given a position (order) $p$, the position embedding in dimension pair $(2i, 2i+1)$ is calculated by $PE_{(p,2i)}=\sin(p/10000^{2i/d_{model}}), \quad PE_{(p,2i+1)}=\cos(p/10000^{2i/d_{model}})$.
Since the calculated position embedding has the same shape as the continuous embedding sequence $\{x_p\}$, it is directly added to each EEG embedding accordingly, so the positional information is intrinsically maintained in the sequence.
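For readers less familiar with the formula, the sinusoidal position embedding can be computed like this (a plain-Python sketch; the sequence length and model dimension are illustrative):

```python
import math

def position_embedding(num_positions: int, d_model: int):
    """PE(p, 2i)   = sin(p / 10000^(2i/d_model))
       PE(p, 2i+1) = cos(p / 10000^(2i/d_model))   (Vaswani et al., 2017)"""
    pe = [[0.0] * d_model for _ in range(num_positions)]
    for p in range(num_positions):
        for i in range(0, d_model, 2):            # i runs over the even dims (2i)
            angle = p / (10000 ** (i / d_model))
            pe[p][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[p][i + 1] = math.cos(angle)
    return pe

# the embedding for position p is added element-wise to the p-th EEG embedding x_p
pe = position_embedding(num_positions=4, d_model=8)
```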
**Further question: how does the codex mitigate feature noise due to time-wise properties?**
When a human subject reads the same word, the EEG waves can differ depending on the sentence context, the word order, and the time of reading, which leads to time-wise feature variance. The advantage of discrete encoding is that it can minimize these variances: semantic embeddings corresponding to the same word (possibly with noise) are mapped onto one stable discrete codex value, which gives the decoder a more stable feature representation with less time-wise variance.
We will significantly improve the writing in Section 3.2 for better readability.
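The stabilizing effect of the codex can be illustrated with a nearest-neighbour codebook lookup, the core operation of vector quantization (an illustrative sketch with a made-up two-entry codebook, not the paper's trained quantizer):

```python
import numpy as np

def quantize(embeddings: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Replace each continuous embedding by its nearest codebook entry (L2),
    so noisy variants of the same word collapse onto one discrete code."""
    # pairwise distances: (M, 1, d) - (1, K, d) -> (M, K)
    dists = np.linalg.norm(embeddings[:, None, :] - codebook[None, :, :], axis=-1)
    indices = dists.argmin(axis=1)   # discrete codex indices
    return codebook[indices]         # quantized embeddings passed to the decoder

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
# two noisy readings of the "same word" map onto the same codebook entry
noisy = np.array([[0.9, 1.1], [1.05, 0.95]])
print(quantize(noisy, codebook))  # both rows become [1., 1.]
```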
2. **Eye Movements and EEG Data:** Yes, you are right: preprocessing steps were indeed taken on the dataset we used to minimize the influence of eye movements. In [2], 9 EOG channels were used for artifact removal, and an additional 14 channels lying mainly on the neck and face were discarded before data analysis. Fig. 1d of [2] shows very good removal performance.
However, a small noisy component related to eye movements may remain, as artifact removal is not 100% accurate. Yet, according to both our correlation analysis and the experiments reported in the EEG-to-Text [3] code, we have not observed a significant component related to eye movement.
Another point is that we use a fairly deep transformer encoder to map the signals into the semantic space; even if some residual eye-movement information remains, it will be jointly encoded into the semantic embeddings, making it hard to isolate the impact of eye movement individually. However, we are keen to explore this point further and will keep updating our analysis.
3. **Section 3.3 under “Word-Level EEG Features...” and how the feature dimension 840 comes about:** Thank you very much for the writing suggestion to improve readability for a wider audience not familiar with earlier work and the dataset. The EEG waves are collected with the Biosemi 128-channel system, where 9 EOG channels were used for artifact removal and an additional 14 channels lying mainly on the neck and face were discarded, leaving 105 EEG channels. For word-level features, the eye-tracking markers are used to slice EEG wave fragments according to each word. Then, as in [2, 3], we average the statistical power features over 8 frequency bands: theta\_{1,2} (4-6 Hz, 6.5-8 Hz), alpha\_{1,2} (8.5-10 Hz, 10.5-13 Hz), beta\_{1,2} (13.5-18 Hz, 18.5-30 Hz), and gamma\_{1,2} (30.5-40 Hz, 40-49.5 Hz). After flattening and concatenating all statistical features, the feature dimension is $8\times 105=840$.
We will enhance this section by providing a step-by-step description of our preprocessing pipeline, ensuring it is self-contained and does not necessitate prior knowledge of earlier works [2, 3].
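As an illustration of how an $8 \times 105 = 840$-dimensional word-level feature vector could be assembled, here is a sketch that averages FFT power within each band per channel (the sampling rate, fragment length, and exact power statistic are our assumptions, not necessarily the dataset's exact pipeline):

```python
import numpy as np

# theta/alpha/beta/gamma bands (sub-bands 1 and 2 each), in Hz
BANDS = [(4, 6), (6.5, 8), (8.5, 10), (10.5, 13),
         (13.5, 18), (18.5, 30), (30.5, 40), (40, 49.5)]

def word_features(eeg: np.ndarray, fs: float = 500.0) -> np.ndarray:
    """eeg: (channels, samples) fragment for one word.
    Returns mean spectral power per (band, channel), flattened."""
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2   # (channels, freq_bins)
    feats = []
    for lo, hi in BANDS:
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(power[:, mask].mean(axis=1))    # mean power per channel
    return np.concatenate(feats)                      # 8 bands x channels

x = word_features(np.random.randn(105, 250))          # 105 channels, 0.5 s fragment
print(x.shape)  # (840,)
```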
4. **Decoded example text:**
According to our experiments, the decoder predicts nouns better than verbs and adjectives. Our observation is that when human subjects read, they normally pay more attention to nouns (names), which are likely to be more complicated, or even rarely seen, than common words. This leads to richer input features for the model to learn from.
Another point is that the data is recorded while the human subject is reading the text. Considering the mentioned terms, “Burroughs and Kerouac” vs. “Heroughs and Kerouac” have a similar pronunciation in the “-roughs” part, so the learned distributions of these words may be close (adjusted from word2vec).
Meanwhile, according to our statistical results on the ZuCo dataset, it contains 6,828 total words, with 300 sentences (1.0) and 390 sentences (2.0). Though the training and testing text corpora are different, they are sampled from the same Wikipedia paragraphs, so it is true that these two names did appear in the training set. The sampled example text is surprising, but we think this behavior is fully under control and as we expected.
[1] Vaswani, Ashi et al, **Attention is all you need** , NeurIPS 2017
[2] Nora Hollenstein, et al, **ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation.** LREC 2020: 138-146
[3] Zhenghailong Wang, et al, **Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification** AAAI 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for these updates and clarifications. I did not realize that the stimuli in the ZuCo datasets consisted of wikipedia excerpts. Does it worry you that the LLMs you used (BART, OPT, & Llama) all (almost certainly) included those wikipedia pages in their training sets?
---
Reply to Comment 1.1.1:
Comment: Thank you to the reviewer for the follow-up questions.
The authors agree that we should be cautious when bridging LLMs with brain waves. The experimental results fit our expectations and are ethically sound. We do not worry much about LLMs having included Wikipedia pages in their training sets, for the following reasons.
1. The **primary objective** of this paper is to **develop a better brain encoding** that aligns more effectively with language model decoders. In our main experiments, we maintained the decoder consistent with baseline models, focusing our comparison on the quality of the learned encoding. With the same decoder, the introduced encoding surpasses previous top results by clear margins, particularly in the raw wave setting. This supports our contribution.
2. The experimental setting involves an open-vocabulary brain-to-text translation task. For open-vocabulary tasks, such as Visual Question Answering (VQA) or image captioning, a certain degree of text phrase overlap between the training and test sets is permissible, especially as the training set size increases. Given the vastness of the training corpus for current Large Language Models (LLMs), most people agree that these modern auto-regressive models learn a joint distribution that inherently encompasses 'concepts' rather than merely replicating the exact same phrases.
Additionally, 18 different human subjects read the same text sample, which already creates a lot of **diversity** in the input features. Directly decoding language from brain waves is a very challenging task. If the encoded embeddings align well with the pre-trained language decoder, it signifies a reduction in the gap between the two modalities, which could benefit follow-up work in the EEG encoding area.
3. The qualitative results also support this point. As evidenced by Table 3, the generated examples are not merely regurgitating Wikipedia excerpts. Rather, they appear to reconstruct the stimuli derived from human subjects during reading, even though the resulting sentences may lack fluency in their semantic coherence. If the decoder is merely reproducing previously encountered content, the text will exhibit greater fluency but diminished correlation. | Summary: The authors propose a new framework called DeWave, which integrates discrete encoding sequences with EEG-to-text translation tasks, using a quantized variational encoder and pre-trained language models. This approach overcomes the mismatch between eye fixations and spoken words and reduces interference from individual differences in EEG waves. The model outperforms previous baselines on the ZuCo Dataset, achieving improved BLEU-1 and Rouge-F scores. Importantly, this work is the first to enable translation of entire EEG signal periods without relying on word-level order markers like eye fixations.
Strengths: - The authors introduce a novel discrete codex encoding to EEG waves, which seems like a promising way of representing EEG signals.
- DeWave has promising results as it achieves state of the art performances on EEG to text translation.
Weaknesses: - Figure 5's caption reads "Ablation study on different codex sizes and perception fields (raw waves)," however, for the table on the left, the experiments are done on word level EEG features. It would be better if the table title on perception field can specify the feature type (raw waves) instead of the caption.
- In the abstract, the authors mention large language models such as ChatGPT, but experiment only on BART. An ablation study on other LLMs such as LLaMA, BERT, or RoBERTa can be enlightening.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The authors seem to have conducted the experiments separately for each patient or reported an averaged score over all patients. In [1], they mention "In spite of the high inter-subject variability in EEG data, it has been shown in previous research of machine learning applications (Foster et al., 2018; Hollenstein et al., 2019a), that averaging over the EEG features of all subjects yields results almost as good as the single best-performing subjects." Have the authors considered this method?
- I am aware that the ZuCo dataset has sentence level features. Additionally, I am aware of some works that have experimented on concatenated word level features [2]. Have the authors considered these different EEG feature types?
[1] Hollenstein N, Renggli C, Glaus B, Barrett M, Troendle M, Langer N, Zhang C. Decoding EEG Brain Activity for Multi-Modal Natural Language Processing. Front Hum Neurosci. 2021 Jul 13;15:659410. doi: 10.3389/fnhum.2021.659410. PMID: 34326723; PMCID: PMC8314009.
[2] Han, William, et al. “An Empirical Exploration of Cross-Domain Alignment between Language and Electroencephalogram.” ArXiv:2208.06348 [Cs, Q-Bio], 10 Aug. 2022, arxiv.org/abs/2208.06348.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We appreciate your thorough review and insightful feedback. We will address each of your comments and concerns below and also in our revised manuscript.
1. **Improve Figure 5’s title**: Thank you for pointing out the slight confusion around the perception-field table. We will revise the table title to "Ablation Metrics on Perception Fields (on Raw Waves)", ensuring that the title itself specifies the feature type (raw waves) rather than relying on the caption. This should improve readability for audiences with less background knowledge.
2. **Experimentation on other LLMs:** We agree with the reviewer that further extending the discrete codex to large language models could be enlightening. We keep BART for the main experiments to hold the decoder at the same scale as previous works, so that we can better attribute our improvement to the discrete coding rather than to a more powerful decoder.
In fact, we have already run a larger-scale ablation, replacing the BART decoder with OPT and Llama v1. However, the performance improvement is not as large as we expected. Bridging brain activities with LLMs and AGI is an important area that merits dedicated follow-up work. **For reasons of caution, we did not include** this experiment in the main paper.
However, we can report those experiments here. Limited by our computing resources, we fine-tuned the OPT-1.3B and Llama-1 7B models in half precision, each for 3 epochs, using PyTorch FSDP. The tokenized EEG waves are prompted into the LLMs following MiniGPT-4's method [1] of handling visual tokens.
| Source | Decoder | BLEU-1 | BLEU-3 | ROUGE-R | ROUGE-P | ROUGE-F |
| ------------------- | ------------------- | ------ | ------ | ------- | ------- | ------- |
| Word-level features | DeWave | 41.35 | 13.92 | 28.82 | 33.71 | 30.69 |
| Word-level features | DeWave + OPT 1.3B | 41.97 | 14.06 | 28.98 | 33.82 | 30.86 |
| Word-level features | DeWave + Llama-1 7B | 42.84 | 15.03 | 29.42 | 35.43 | 32.05 |
| | | | | | | |
| Raw Waves | DeWave | 20.51 | 5.16 | 21.18 | 29.42 | 24.27 |
| Raw Waves | DeWave + OPT 1.3B | 21.31 | 5.84 | 22.09 | 29.94 | 25.42 |
| Raw Waves | DeWave + Llama-1 7B | 22.05 | 6.03 | 22.45 | 30.01 | 26.08 |
3. **Averaging over the EEG features:** Considering potential practical use cases, averaged EEG waves might not be very valuable: in real-world scenarios, translation from brainwaves to text is more likely to be performed on individual human subjects in real time. If our understanding is right, averaging waves across multiple subjects requires pre-collecting EEG waves offline before running benchmark tests. To stay closer to realistic usage, we trained on a mixed dataset and tested on different single human subjects to better reflect real-world applicability.
4. **Different EEG feature types:** Yes, we considered this method previously. There are two reasons why we didn't use it.
- EEG-to-text translation is conducted on long sentences, unlike traditional classification tasks. The per-word EEG feature fragments vary greatly depending on how many times a word appears in the sentence and how many times a subject looks at that word while reading. Concatenating features therefore leads to higher computational cost and unstable input features.
- We drafted such experiments at an earlier stage. In those experiments, the concatenation method performed significantly worse than averaging word-level features, which is the approach used by [2] and our paper.
[1] Deyao Zhu et al, **MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models**
[2] Zhenghailong Wang, et al, **Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification** AAAI 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for the insightful rebuttal and clarification to my questions! I will raise my score to a 6. I wish the authors the best of luck. | Summary: The authors present an approach to decode language from EEG data. The proposed approach can work on both time-locked and raw data (i.e. without markers indicating when a word was read). The model is trained to (1) reconstruct its EEG input using a vector-quantized representation (learnable "codex") in a pretraining stage, (2) align the learned codex representations with word2vec embeddings of the corresponding words and (3) fine-tune the whole model end-to-end including a language model. Experiments on a text-EEG dataset are presented, showing the proposed approach outperforms existing baselines including other self-supervised learning objectives that do not use vector quantization on BLEU and ROUGE metrics. Ablation studies on codex size and windowing parameters are presented, along with an analysis of cross-subject performance. Finally, examples of EEG-to-text decoding are also presented.
Strengths: Originality: The proposed approach combining vector quantized representation learning and "freeform" text generation from decoded EEG is novel.
Quality: The submission appears technically sound, with claims supported by benchmark comparisons and ablation studies.
Clarity: The paper is mostly clear, though some details are hard to understand from the text (see questions).
Significance: I believe these results pave the way to better text decoding from EEG data. This is important for the field of brain decoding given a lot of the work on brain-to-text has been using intracranial modalities or fMRI, which are very expensive and dramatically more constraining than surface EEG.
Weaknesses: The main weakness of this submission in my opinion is that several elements remained unclear after reading the text and taking a look at the provided code. For instance, I did not understand how the alignment was achieved between raw EEG and words/text (Question 2), how the language model was fine-tuned (Question 3) and how DeWave was combined with existing SSL approaches (Question 4).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Section 3.3, lines 136-141: The description in this paragraph is unclear to me. What is meant by "statistical result of four frequency bands"? I assume that summary statistics of the power in the four different bands were used? Also, the phrase "different fragments may have different wavelengths" is confusing as I believe the same frequency bands are used for each EEG fragment. More generally, why is a "handcrafted feature" approach used for the word-level features, while a similar convolutional encoder as is used for the "raw wave" could instead be trained end-to-end? The ablation in Table 4 shows that pretraining the word-level models improves performance by a small amount, and I wonder if performance would further improve with a learned tokenization.
2. I do not understand how the sequence of embeddings obtained from the raw EEG were aligned with word2vec representations (Section 3.3, second paragraph). In the word-level experiments, my understanding is that each word was embedded with word2vec, and that the resulting embedding was used to align the EEG-based discrete representations with the loss of Eq.4. However in the "raw" case, how was word2vec applied to the text, i.e. was it still applied at the word level?
3. What loss is used to fine-tune the whole system end-to-end including the language model?
4. Table 1 shows that combining DeWave with SCL further improves performance over the base DeWave model. How were these two approaches combined, and did the SCL approach had to be modified?
5. The approach of [2] should be written "wav2vec" instead of "wave2vec" in the text and in Figure 2, Table 1, etc.
6. There is missing information about the dataset (number of subjects, length of data collection, description of the text corpus, etc.).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your comprehensive review, appreciation, and insightful inquiries. We address your concerns step by step below and will improve the final version accordingly.
1. **Regarding Section 3.3, lines 136-141 and beyond:** This paragraph introduces how we construct the word-level EEG features. The method is the same as in the ZuCo dataset [1] and EEG-to-Text [2], for a fair comparison.
- "statistical result of four frequency bands": When the ZuCo dataset preprocesses word-level features, it averages power statistical features on all corresponding EEG fragments of four main bands (two sub-bands for each main band), including 'theta\_{1,2} (4-6 Hz, 6.5-8 Hz)', 'alpha\_{1,2} (8.5-10 Hz, 10.5-13 Hz)', 'beta\_{1,2} (13.5-18 Hz, 18.5-30 Hz)', 'gamma_{1,2} (30.5-40 Hz, 40-49.5 Hz)'.
- By "different fragments may have different wavelengths", we meant to emphasize the sliced EEG fragments have **varying time lengths of waves**, not wavelengths. The varying time lengths of EEG waves caused by the eye-tracking markers length variation arises because human subjects spend different amounts of time reading different words. Apologize for the typo here, we will correct “wavelengths” to the “different lengths of EEG wave samples” throughout the whole paper.
- Comparing word-level handcrafted features with raw waves: though both are hard tasks, translation on word-level features is easier because:
- It already has an EEG feature for word alignment suggested by eye-tracking markers. Instead, the raw wave setting requires the transformer to learn the alignment through self-supervised pre-training as illustrated in Fig.3.
- Less noise: each word-level feature is consistent within a sentence, as features are averaged regardless of how many times the same word appears. The performance gap in Table 1 supports this point.
- Learned tokenization: thank you for a good point. We think the projection layer plays a similar role, 'tokenizing' handcrafted word-level features into the transformer space. Tab. 4 suggests that learned tokenization yields improvements in the word-level setting.
2. **Alignment on raw waves:** In the raw-wave setting, there are no eye-tracking markers with which to slice the EEG waves. As shown in Fig. 3, we slide a wide convolutional kernel over the waves to slice them sequentially in time. Position embeddings are then added to each sliced embedding to preserve order information. The alignment happens extrinsically at the sequence (sentence) level: we simultaneously optimize the discrete coding and a CLIP-like alignment as described in Eq. 4, where the alignment matrix $s_{i,j}$, containing all possible alignments, is computed between each encoded EEG embedding and text embedding in a sequence pair. The transformer attention encoders are expected to learn the alignment $s_{i,j}$ intrinsically given the position embeddings and output supervision. The current code only contains end-to-end training, the discrete codex, and evaluation; code for contrastive alignment pretraining will be released after the anonymous review period.
3. **End-to-end fine-tuning:** As described in lines 128-132, we maximize the log-likelihood of the language output, as in most end-to-end language-decoder training, i.e. $L=-\log P(W \mid z_q(X))$, for both word-level EEG features and raw waves. The only difference from Eq. 2 is that the discrete codex is fixed during fine-tuning. Thank you very much for the notice; we will make these points clearer for better readability.
4. **How were DeWave and SCL combined?** Here "+SCL" means we apply the additional contrastive loss proposed in their paper, combined with a coefficient, when training the transformer encoder before the discrete codex. As SCL is an RNN-based model, we did not use its model structure. However, this point is not related to our core contribution. We will clarify it in the final version and add a section with details in the supplementary materials.
5. **Typo Correction:** Thank you for pointing out the problem regarding "wav2vec". We will correct "wave2vec" to "wav2vec" throughout the text, Fig. 2, Tab. 1, and other potential problems.
6. **More information about the dataset:** We use the ZuCo dataset (both 1.0 and 2.0) [1] for our main experiments, exactly as in [2]. ZuCo stands for Zurich Cognitive Language Processing Corpus; it includes raw and preprocessed eye-tracking and electroencephalography (EEG) data. The data were collected by having human subjects read a given text corpus while simultaneously recording the eye-tracking signal and EEG waves with a Biosemi-128 system. After denoising, 105 of the 128 channels are provided for downstream tasks. ZuCo 1.0 was collected from 12 subjects and ZuCo 2.0 from 18 subjects.
Regarding the text corpus of the ZuCo dataset, these were sourced from a diverse set of textual genres to ensure a wide variety of syntactic structures and word frequencies, which includes: 1) Wikipedia articles 2) movie reviews 3) BNC (British National Corpus).
We further applied a statistical analysis to the sentences we used from the dataset, reported below.
| Feature | ZuCo 1.0 Natural Reading | ZuCo 2.0 Natural Reading |
| ------------ | :----------------------: | :----------------------: |
| sentences | 300 | 390 |
| sent. length | 21.3 (±10.6) | 19.6 (±8.8) |
| total words | 6386 | 6828 |
| word length | 6.7 | 4.9 |
[1] Nora Hollenstein, et al, **ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation.** LREC 2020
[2] Zhenghailong Wang, et al, **Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification** AAAI 2022
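As a toy illustration of the fine-tuning objective from point 3 (our own sketch with made-up names, not the authors' released code): the encoder output is snapped to its nearest entry of the fixed discrete codex, and the decoder is trained with token-level cross-entropy, i.e. $L=-\log P(W \mid z_q(X))$.

```python
import numpy as np

# Toy sketch (our own, not the authors' code) of the two pieces described
# above: a fixed discrete codex lookup z_q(X) via nearest-neighbor
# quantization, and the fine-tuning objective L = -log P(W | z_q(X)) as a
# token-level cross-entropy over the decoder's output distribution.

def quantize(x, codex):
    """x: (T, d) encoder outputs; codex: (K, d) fixed codebook entries."""
    d2 = ((x[:, None, :] - codex[None, :, :]) ** 2).sum(-1)  # (T, K) sq. dists
    return codex[d2.argmin(axis=1)]                          # snap to nearest

def neg_log_likelihood(logits, targets):
    """logits: (T, V) decoder scores given z_q(X); targets: (T,) token ids W."""
    logits = logits - logits.max(axis=1, keepdims=True)      # stabilize
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(targets)), targets].mean())
```

In a real system the decoder would be a pretrained language model (BART here); the sketch only shows the shape of the objective with the codex held fixed.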
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their answers to my questions. Some short follow-ups:
Q1: Got it, thank you for clarifying this. May I suggest to replace "different lengths of EEG wave samples" by something like "different EEG window sizes"? I believe "window" is clearer than "wave samples" in this context.
Q2: In an EEG-text pair $(i,j)$, what is $j$ and how is it matched to $i$? My current understanding is that $j$ is a single word. In that case, and knowing that there is no eye tracking information, how is a "corresponding pair" defined? Couldn't $j$ be (correctly) matched to multiple EEG embeddings?
---
Reply to Comment 1.1.1:
Comment: Thank you to the reviewer for the questions.
**A1:** Thank you very much for the suggestion. We agree that changing "different lengths of EEG wave samples" to "different EEG window sizes" makes it clearer. We will revise this phrase accordingly throughout the paper.
**A2:** Yes, your understanding is right. The term $i$ denotes the $i$-th EEG embedding in the EEG embedding sequence of length $N$, and $j$ denotes the $j$-th word2vec embedding in the word sequence of length $N$. The EEG embedding sequences are extracted from raw EEG waves, without eye-tracking information and in chronological order, by 1) sliding conv kernels (turning continuous raw waves into a raw EEG embedding sequence), 2) a transformer encoder (encoding the raw sequence into an organized sequence), and 3) the discrete codex (acquiring comparatively stable embedding values).
We calculate the $N\times N$ similarity matrix between these two sequences, each of length $N$. The entry $s_{i,j}$ denotes the similarity between the $i$-th EEG embedding and the $j$-th word embedding. A "corresponding pair" $(\hat{i},\hat{j})$ is a **diagonal element of that matrix**, where $\hat{i}=\hat{j}$. The contrastive objective is to maximize the similarity of the corresponding pairs and push the others away. We thus expect the transformer encoder to learn an organized embedding sequence (compared to the raw sequence) from the position embeddings and the intrinsic correlations inside the raw sequence, and we expect the learned EEG sequence to be aligned with the text sequence order.
This setting is based on the **prior** that the human subject reads in **chronological order**. According to our observations from both the ZuCo data and our own data-collection experiments, when human subjects read or silently speak, the fixation order is mostly chronological. Thus, we trust the transformer encoder to organize the raw sequence and achieve pre-order alignment (toward the diagonal).
Regarding the question of a $j$-th word embedding potentially matching multiple embeddings in the EEG sequence: if we understand your question correctly, this situation does exist. Consider the token sequence ["the", "apple", "is", "on", "the", "table"]: "the" appears at both the $0$-th and $4$-th positions. However, according to our analysis of the ZuCo dataset, such multiple matches are mostly abundant function words, such as the articles "a" and "the" or the verbs "is" and "are". Since readers pay little attention to these words, with very short fixation times, we treat this small proportion of multiple matches as noise. In our implementation, we mask out the duplicated word locations outside the diagonal ($s_{0,4}, s_{4,0}$ in this example) to prevent conflicts. Experimentally, this method works well when training on the large ZuCo corpora and first realizes freeform translation on raw waves.
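The masking scheme above can be sketched as follows (illustrative code of our own with hypothetical names; the temperature and exact loss form are assumptions, not the released implementation):

```python
import numpy as np

# Illustrative sketch (our own, not the authors' code) of the masked
# contrastive alignment: the N x N similarity matrix s_{i,j} is computed
# between the EEG and word embedding sequences, the diagonal entries are the
# positive "corresponding pairs", and off-diagonal positions of duplicated
# words (e.g. s_{0,4} and s_{4,0} for a repeated "the") are masked out so
# they are not pushed away as negatives. The temperature tau is an assumption.

def masked_contrastive_loss(eeg, txt, words, tau=0.07):
    """eeg, txt: (N, d) L2-normalized embeddings; words: list of N tokens."""
    N = eeg.shape[0]
    sim = eeg @ txt.T / tau                      # similarity matrix s_{i,j}
    mask = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for j in range(N):
            if i != j and words[i] == words[j]:  # duplicated word, off-diagonal
                mask[i, j] = True
    sim = np.where(mask, -np.inf, sim)           # exclude from the softmax
    m = sim.max(axis=1, keepdims=True)           # stable log-sum-exp per row
    logz = (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True))).ravel()
    return float(np.mean(logz - np.diag(sim)))   # cross-entropy to diagonal
```

Only the EEG-to-text direction is shown; a symmetric text-to-EEG term could be added in the same way.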
We will update more details on the incoming code release, and also make this point clearer in writing accordingly in the paper and supplementary materials. | Rebuttal 1:
Rebuttal: Dear chairs and reviewers,
We express our profound gratitude for the comprehensive feedback and comments on our manuscript. This paper received ratings of **Accept, Weak Accept, Borderline Accept, and Weak Accept** during the review period. We are excited about the consensus among the reviewers regarding the novelty and potential impact of our work.
This paper introduces vector quantized representation learning and contrastive alignment between EEG waves and natural language. The DeWave experiments showcased state-of-the-art performance on word-level EEG-to-text translation. Additionally, DeWave is the pioneering effort to achieve language decoding directly from raw waves, a significant step towards real-world applications that alleviates the dependence on pre-known eye-tracking markers for segmenting EEG waves. Meanwhile, the discrete codex encoding introduced for EEG waves also provides a new option for follow-up works that require EEG wave vectorization.
Building upon the above, we have acted on the reviewers' feedback to enhance the clarity and thoroughness of our paper. Please refer to the rebuttal for each specific review for step-by-step clarification on potential unclear technology inquiries below. Additionally, we have undertaken a thorough refinement of the manuscript to rectify typos and augment its readability for a broader audience. We are committed to ensuring the manuscript's excellence and its readiness for publication.
The related code provided in the review area will be refined and made public after the anonymous review phase.
To conclude, we earnestly believe that our approach can make a good contribution to the EEG decoding realm. The feedback has encouraged us to perfect our manuscript to ensure it can contribute to the community as a solid publication.
Best wishes,
Paper Authors | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Online learning of long-range dependencies | Accept (poster) | Summary: The paper looks at the problem of online learning in RNNs from a perspective of exact gradient computation: it looks for network architectures for which exact gradients can be computed. It uses the recently proposed LRU (Orvieto et. al., 2023) to show that with linear and diagonal structure in recurrent dependencies, one can tractably compute the exact gradients online for a single layer network. For multi-layer networks, this simplified structure combined with truncation of gradient signal (only considering current time step) results in an approximate gradient.
The paper performs two sets of experiments. The first set examines the difference in the approximate gradient computed using the proposed method with the exact gradient. The second set of experiments focuses on learning tasks that require capturing long range dependencies. In both set of experiments the proposed method closes the gap to BPTT compared to the baselines used in the paper.
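To make the single-layer tractability point concrete, here is a minimal sketch of exact online (RTRL-style) gradient computation for a diagonal linear recurrence $h_t = \lambda \odot h_{t-1} + B u_t$. This is our own toy illustration (real-valued $\lambda$ and a stand-in squared-error loss), not the paper's code; the LRU itself uses complex diagonal dynamics.

```python
import numpy as np

# Toy sketch of exact online gradients for a diagonal linear recurrence
# h_t = lam * h_{t-1} + B @ u_t. Because the recurrence is diagonal, the
# sensitivity s_t = dh_t/dlam is a single vector carried forward in O(n)
# memory via s_t = h_{t-1} + lam * s_{t-1} (elementwise). The loss here is
# a stand-in squared error on h_t; lam is real-valued for simplicity.

def online_grad_lam(lam, B, us, targets):
    n = lam.shape[0]
    h = np.zeros(n)
    s = np.zeros(n)          # sensitivity dh_t/dlam
    g = np.zeros(n)          # running gradient of the total loss w.r.t. lam
    for u, y in zip(us, targets):
        s = h + lam * s      # propagate sensitivity before updating h
        h = lam * h + B @ u
        g += (h - y) * s     # dL_t/dh_t = h_t - y for L_t = 0.5*||h_t - y||^2
    return g
```

The accumulated `g` matches the gradient backpropagation-through-time would compute over the full sequence, but is available at every step without storing past states.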
Strengths: ### Originality
The paper combines two known ideas, independent linear recurrent modules (LRU from Orvieto et. al., 2023) and gradient sparsification for online learning (Menick et. al., 2021), and presents them in new light.
### Quality
1. The paper is technically sound, has carefully designed experiments that support the claims well.
2. claims supported by theory and/or experiments
### Clarity
1. The paper is well organized and clear for most part. There are some parts that can be improved (see weaknesses).
### Significance
1. The well designed experiments make the results presented in the paper useful for the community.
Weaknesses: ### Quality
1. Along with other claims that are well supported, the authors claim that they have pushed the standard for what is possible through online recurrent learning. However, I believe that the current experiments, which are limited to simple long-range dependency tasks, are not sufficient for such a grand claim. Specifically, since all the experiments in the paper use synthetic data, it would be valuable, for example, to include an experiment on real-world data such as small-scale language modeling, compare the results to an LSTM language model, and discuss the limitations of the work on real-world data whose dependency structure is complex rather than simply long-range. Such an experiment would show the limitations of online learning and help guide further research on the topic.
2. Why does table 1 not contain sparse approximation method like SnAp-n applied to dense RNNs?
### Clarity
Following are some mistakes/typos in the paper that can throw a reader off.
1. There is a mix-up in the symbols in equation 1, and line 103. I think $x_t$ should be replaced with $h_t$ in Eq 1, and line 103 should state that $u_t$ is the input at time $t$.
2. Line 109-111: I think you mean to say *absence of temporal non-linearity* in line 110.
Following are some suggestions that could improve the paper:
1. While the authors provide a thorough discussion of single-layer networks (Sec 3.1), the more important case of multi-layer networks does not get the same amount of care. The entire discussion in section 3.2 happens without detailed expressions. Figure 1 does provide some support to the discussion but I find it to be insufficient. At the least, the authors should accompany the pictorial description in Fig 1 with an expression for $\delta$ at each layer. This will help with the discussion of various points that come up in the experiment section 4.1 that center on the effect of depth. Specifically, an expression for $\delta$ at a lower layer will help with the point made in lines 201-205, which is later discussed in section 4.1 around line 245.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### Typos and suggestions
1. Line 130: Using them for complex differentiation allows using calculus rules that are similar to those for real valued functions.
2. Line 167: "Those" as a vague reference here might cause confusion. It might be better to say $e^\lambda$ and $e^B$ explicitly.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses (Quality, point 1): the experiments are limited to synthetic long-range dependency tasks, and an experiment on real-world data (e.g., small-scale language modeling compared against an LSTM language model) would help delineate the limitations of online learning and guide further research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful comments and thoughts. We reply below to each point raised individually.
> Along with other claims that are well supported, the authors claim that they have pushed the standard for what is possible through online recurrent learning. However, I believe that the current experiments, which are limited to simple long range dependency tasks, are not sufficient for such a grand claim. Specifically, since all the experiments in the paper use synthetic data, it will be nice, for example, to include an experiment on real world data like small scale language modeling and compare the results to an LSTM language model, and discuss the limitations of the work in terms of performance on real world data that has complex dependency structure that is not simple long-range structure. Such an experiment will show the limitations of online learning and help guide further research on the topic.
We would like to note that approximate RTRL research is still in a proof-of-concept phase, and prior work has at best considered only very small-scale language modeling experiments. These would be the experiments within our reach. The problem is that such small-scale experiments would likely not reveal the ability to model long-range dependencies; our impression based on prior work is that most of the capacity of small RNN models is used to learn dominant short-range dependencies. This may be one of the reasons why the SnAp authors found surprisingly good results on small-scale character-level modeling already with reservoir (not learned) GRU networks, and why SnAp almost matched BPTT. We would also note that the LRA is not entirely toyish and is a challenging problem suite; for example, transformer models are not strong performers there, and they require large attention spans on the order of hundreds of tokens [24].
We have however experimented with single- and multilayer GRU networks on the copy task (cf. [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS)), trained with approximate RTRL (SnAp combined with spatial backpropagation to allow training deep networks, as we did for our models), BPTT, TBPTT and spatial backpropagation, and found that such networks are greatly outperformed by LRU networks when trained online. Interestingly, only BPTT can leverage the additional capacity of deep GRU networks, whereas online approximate RTRL stagnates at some value corresponding to a 1-layer GRU. This again shows the power of approximate online RTRL when paired with networks with independent recurrent modules. We are currently running experiments with both single- and multilayer GRU networks on sequential CIFAR and we will add those results to the next version of the paper.
> Why does table 1 not contain sparse approximation method like SnAp-n applied to dense RNNs?
We now trained dense linear RNNs using BPTT, truncated BPTT, spatial backpropagation and our hybrid learning rule (which combines SnAp-like forward sensitivity propagation with spatial backpropagation) on sequential CIFAR, cf. [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS) results pdf. Our online-learned LRUs greatly outperform online-learned dense linear RNNs. The next version of the paper will include dense linear RNN results for the remaining LRA tasks considered here, ListOps and IMDB.
> While the authors provide through discussion for single-layer networks (Sec 3.1), the more important case of multi-layer networks does not get the same amount of care. The entire discussion in section 3.2 happens without the use of detailed expressions. Figure 1 does provide some support to the discussion but I find it to be insufficient. At the least, the authors should accompany the pictorial description in Fig 1 with expression for $\delta$ at each layer. This will help with the discussion of various points that come up in the experiment section 4.1 that centers around effect of dept. Specifically, an expression for $\delta$ at a lower layer it will help the point made in line 201-205, that is later discussed in section 4.1 around line 245.
Thank you for this suggestion with which we agree. We will expand section 3.2 with expressions for the spatially backpropagated error $\delta$ and improve Fig. 1 following your suggestion.
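For illustration, one generic way such expressions could be written (generic notation; the exact form in the revised paper may differ): with layer outputs $x^l_t$ and instantaneous loss $L_t$, the spatially backpropagated error at time $t$ is

$$\delta^{L}_{t}=\frac{\partial L_t}{\partial x^{L}_{t}},\qquad \delta^{l}_{t}=\Big(\frac{\partial x^{l+1}_{t}}{\partial x^{l}_{t}}\Big)^{\top}\delta^{l+1}_{t},\quad l=L-1,\dots,1,$$

and the online update for layer-$l$ recurrent parameters $\theta^l$ combines $\delta^l_t$ with that layer's forward-propagated sensitivities, $\frac{\mathrm{d}L_t}{\mathrm{d}\theta^l}\approx(\delta^l_t)^{\top}\frac{\mathrm{d}x^l_t}{\mathrm{d}\theta^l}$.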
> There is a mix-up in the symbols in equation 1, and line 103. I think $x_t$ should be replaced with $h_t$ in Eq 1, and line 103 should state that $u_t$ is the input at time $t$.
> Line 109-111: I think you mean to say absence of temporal non-linearity in line 110.
>
> Line 130: Using them for complex differentiation allows using calculus rules that are similar to those for real valued functions.
>
> Line 167: "Those" as a vague reference here might cause confusion. It might be better to say $e^\lambda$ and $e^\gamma$ explicitly.
Thank you for catching these mistakes, which we will correct in the next version of the paper.
> Along with other claims that are well supported, the authors claim that they have pushed the standard for what is possible through online recurrent learning. However, I believe that the current experiments, which are limited to simple long range dependency tasks, are not sufficient for such a grand claim.
We did not wish to oversell our claims. We will tone down our discussion, presenting our results as promising while clarifying that they are still limited to relatively small-scale image (sequential CIFAR) and text (IMDB) classification and symbol manipulation (ListOps) tasks.
We remain fully available to respond to any further questions you may have during the discussion period.
---
Rebuttal Comment 1.1:
Title: Thank you for the thorough response
Comment: I thank the authors for their thorough response and for providing additional results. With the additional information, I now think that the paper can have significant impact, at least on the RTRL community. I've increased my score by one point to reflect this. | Summary: The authors provide an online learning algorithm for linear recurrent units [23]. They take advantage of the fact that each unit of the LRU is an ‘independent recurrent module’ and thus RTRL for each LRU layer simplifies substantially in this case and becomes tractable. They test on some long-range dependency tasks.
Strengths: - The authors have combined insights from recent developments like LRUs being as good as RNNs while being easier to train, and approximations of RTRL for online local learning as in biology.
- They demonstrate good results on some long-range dependency tasks as compared to RNNs.
Weaknesses: - The learning algorithm proposed seems to me just e-prop [13] applied to the LRU. Indeed, e-prop also takes into account self-recurrence of each unit and 1-step lateral recurrence. With LRU, since there is no lateral recurrence, e-prop becomes exact within one LRU layer.
- Their algorithm still relies on spatial backprop between layers, so this is still not local in space.
- The authors claim that “numerous biological implementations and alternatives for spatial backpropagation have been proposed [e.g., 50–59], while essentially none exist yet for backpropagation-through-time [60].”. But various approximations for BPTT have also been proposed like [13].
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The authors should study the relation with e-prop and inform if there are any differences with it. To me, their algorithm is the same.
> In Fig. 3, I expect that BP is BPTT, and Spat. is just the spatial backpropagation without temporal dependencies? If not, was BPTT not used?
Minor:
- Mistakes in equation (1)! Cf. appendix A. I’ve classified this as minor, but it does not inspire confidence that the very first equation has many mistakes.
- L304: inference -> inference
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful review. We reply to your specific questions and concerns individually below.
> The learning algorithm proposed seems to me just e-prop [13] applied to the LRU. Indeed, e-prop also takes into account self-recurrence of each unit and 1-step lateral recurrence. With LRU, since there is no lateral recurrence, e-prop becomes exact within one LRU layer. [...] The authors should study the relation with e-prop and inform if there are any differences with it. To me, their algorithm is the same.
Our learning rule is indeed very closely related to e-prop and SnAp. We will update our manuscript with a detailed discussion, cf. [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS). Besides being able to learn complex neurons, our rule handles multilayer recurrent neural networks (RNNs) differently, something that we failed to point out previously. More concretely, e-prop and SnAp-1 do not send detailed spatial credit assignment information in deep RNN models such as the LRU. At best they only broadcast a global error backwards through skip connections, and in the worst case, for networks without skip connections, they lead to identically zero updates to deep hidden recurrent neurons (those that do not directly influence the loss). By combining forward sensitivity propagation with spatially backpropagated errors, our hybrid learning rule delivers detailed learning signals even to hidden recurrent neurons, while retaining the cost of a single inference pass.
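To make this difference concrete, here is a small toy sketch of our own (not the paper's code; names such as `W12` and `w_out` are hypothetical, and the scalar diagonal recurrence is a simplification) contrasting the e-prop/SnAp-1-style partial-derivative error for a hidden recurrent layer, which is identically zero, with the spatially backpropagated error the hybrid rule uses, for a two-layer stack whose loss reads out only the top layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Two stacked recurrent layers; the loss reads out layer 2 only.
W12 = rng.normal(size=(n, n))    # spatial weights, layer 1 -> layer 2 input
w_out = rng.normal(size=n)       # readout weights on layer 2
h1 = rng.normal(size=n)          # layer-1 hidden state at time t
h2_prev = rng.normal(size=n)
lam2 = 0.7                       # diagonal recurrence of layer 2 (scalar toy)

h2 = lam2 * h2_prev + W12 @ h1   # layer-2 state depends on h1 within the step
err = w_out @ h2 - 1.0           # scalar instantaneous error (target 1.0)

# e-prop / SnAp-1 use the partial derivative of L_t w.r.t. h1_t with h2_t
# held fixed: layer 1 is not directly wired to the loss, so the signal is 0.
delta1_partial = np.zeros(n)

# The hybrid rule instead backpropagates the instantaneous error spatially,
# through the readout and the inter-layer weights, giving a detailed and
# generally nonzero error for the hidden layer.
delta2 = err * w_out             # dL_t/dh2_t
delta1_spatial = W12.T @ delta2  # dL_t/dh1_t via the spatial pathway
```

The zero vector `delta1_partial` versus the generically nonzero `delta1_spatial` is exactly the gap the rebuttal describes for hidden recurrent neurons.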
> Their algorithm still relies on spatial backprop between layers, so this is still not local in space.
Yes. In this manuscript, we opted to present the architecture and algorithm in its purest form and attack the challenging LRA suite. We will add a comment to the next version of the paper, explicitly suggesting that it would be interesting to explore replacing spatial backpropagation by approximations such as feedback alignment (Lillicrap et al., 2015) in future work.
> The authors claim that “numerous biological implementations and alternatives for spatial backpropagation have been proposed [e.g., 50–59], while essentially none exist yet for backpropagation-through-time [60].”. But various approximations for BPTT have also been proposed like [13].
We originally intended to emphasize that the rule only needs spatial backpropagation and not reverse-mode temporal backpropagation, in the strict sense (not encompassing forward-mode differentiation), but the remark ended up being confusing. We will remove it from the discussion in the next version of the paper.
> In Fig. 3, I expect that BP is BPTT, and Spat. Is just the spatial BPTT without temporal dependencies? If not, was BPTT not used?
Yes, BP is BPTT, and Spat. is spatial backpropagation. This will be clarified.
> Minor: Mistakes in equation (1)! Cf. appendix A. I’ve classified this as minor, but it does not inspire confidence that the very first equation has many mistakes. L304: inference -> inference
Thank you for spotting these typos, which will be corrected in the next version of the paper.
We are more than happy to answer additional questions that you may have during the discussion period. We note that we ran additional experiments triggered by other reviews, which we collected in the [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS).
---
Rebuttal Comment 1.1:
Title: read rebuttal
Comment: I have read the authors' rebuttal. They have accepted the issues pointed out by others and me and agree to elaborate on / clarify these issues. I maintain my rating of 6: weak accept.
Minor point: Instead of feedback alignment (Lillicrap et al 2015) for local spatial backprop, look at Akrout et al 2019 and the earlier references therein -- far before ML discovered feedback alignment with some fanfare, computational neuroscientists were just learning the feedback weights to align correctly which works much better than feedback alignment! | Summary: The authors show that applying an online learning algorithm to independent recurrent modules of linear recurrent units, drastically reduces the algorithm’s computational and memory requirements. They then show numerically that the algorithm’s gradient approximation for multi-layer networks is close to the “real gradient” (as provided by BPTT) and that the algorithm performs well across a range of tasks despite the approximation and using decoupled recurrent units.
Strengths: The authors take a promising path towards effective and efficient gradient-based online learning of recurrent neural networks: Instead of deriving a new approximation of BPTT or RTRL, they choose a network architecture that drastically reduces the computational and memory requirements when deriving online gradient updates for them. The paper provides a comprehensive overview and discusses relations and differences to previous work. The authors compare their proposed network architecture to other architectures and learning algorithms on the copy task. They further evaluate their architecture for three different learning algorithms on the sCIFAR and ListOps task. The paper is very well structured, written and accessible. The algorithm is derived in great detail, including a very helpful primer on complex differentiation that aids understanding of the research question, results and potential limitations. The figures are generally clear and accessible.
Weaknesses: - While I want to strongly emphasise that I think it is awesome that the authors flag that the SnAp-1 algorithm is reducing to the proposed algorithm when being applied to the proposed network architecture; this opens questions about the contributions made by the paper. To what extent is the proposed algorithm new? How is it different from SnAp-1? Maybe a list of contributions at the beginning of the paper could bring clarity? I also would like to flag here that I am not familiar with SnAp-1.
- The authors write that the “most remarkable result” is that their online learning algorithm significantly outperforms learning of networks with densely connected recurrent neurons on the copy task. Given that the copy task requires to memorise a sequence of patterns with i.i.d. sampled entries, is it really surprising that having no interference between hidden units is helpful?
- To show that the decoupled network architecture performs reliably and well and is a promising alternative to coupled RNNs, in my opinion comparisons on the more complex tasks, i.e., sCIFAR and ListOps, are missing (e.g., like the data in figure 3).
Minor:
- Two typos in equation 1 (formula for yt)
- No description for panels A-D in caption of figure 2
- Maybe adding h-lines for 100% and 70% accuracies in figure 2?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Where is the high variance of the proposed algorithm in the ListOps task coming from?
- In order to make an online task out of sCIFAR, the authors provide the label at every timestep. How is target encoded / scaled over time? What is the influence on performance of providing the target at each time step?
- In lines 61-62 the authors write “Finally, in Section 4, we analyse our algorithm and relevant baselines [...] with sequence lengths up to over 1000 steps” – what experiment is this referring to and how are 1000 steps required?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: A discussion of limitations of the work is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful review. We reply to your specific questions and concerns individually below.
> While I want to strongly emphasise that I think it is awesome that the authors flag that the SnAp-1 algorithm is reducing to the proposed algorithm when being applied to the proposed network architecture; this opens questions about the contributions made by the paper. In how far is the proposed algorithm new? How is it different from SnAp-1? Maybe a list of contributions at the beginning of the paper could bring clarity? I also would like to flag here that I am not familiar with SnAp-1.
Thank you for raising this question, which made us realize that we were not precise enough in our comparison with SnAp-1. We will update our manuscript with a detailed discussion, cf. [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS). Besides being able to learn complex neurons, our rule handles multilayer recurrent neural networks (RNNs) differently, something that we failed to point out previously. More concretely, SnAp-1 does not send detailed spatial credit assignment information in deep RNN models such as the LRU. At best it only broadcasts a global error backwards through skip connections, and in the worst case for networks without skip connections it leads to identically zero updates to deep hidden recurrent neurons (that do not directly influence the loss). Getting a learning signal for such neurons requires raising $k$ in SnAp-$k$, which comes at additional computational and memory costs. In contrast to SnAp, by combining forward sensitivity propagation with spatially backpropagated errors, our hybrid learning rule delivers detailed learning signals even to hidden recurrent neurons, while retaining the cost of a single inference pass.
> The authors write that the “most remarkable result” is that their online learning algorithm significantly outperforms learning of networks with densely connected recurrent neurons on the copy task. Given that the copy task requires to memorise a sequence of patterns with i.i.d. sampled entries, is it really surprising that having no interference between hidden units is helpful?
>
The independent recurrent module design is indeed well-suited for the copy task. However, we would like to emphasize that (1) our learning rule is still much better than truncated and spatial backpropagation applied to the same architecture (cf. Fig. 2 E/F), and (2) for this task and architecture (at this width), performance still improves with depth (cf. Fig. 2 A/B) when training with our learning rule. Together these two points show that our rule can do deep spatiotemporal credit assignment.
> To show that the decoupled network architecture is performing reliably and well and is a promising alternative to coupled RNNs, in my opinion a comparisons on the more complex tasks, i.e., sCIFAR and ListOps, are missing (i.e. like the data in figure 3) .
>
We now trained dense linear RNNs using BPTT, truncated BPTT, spatial backpropagation and our hybrid learning rule (which combines SnAp-like forward sensitivity propagation with spatial backpropagation) on sequential CIFAR, cf. the results pdf in the [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS). Our online-learned LRUs greatly outperform online-learned dense linear RNNs. The next version of the paper will include dense linear RNN results for the remaining LRA tasks considered here, ListOps and IMDB.
> Minor: Two typos in equation 1 (formula for $y_t$). No description for panels A-D in caption of figure 2. Maybe adding h-lines for 100% and 70% accuracies in figure 2?
Thank you for catching the typos and for the helpful figure suggestions that we will take in.
> Where is the high variance of the proposed algorithm in the ListOps task coming from?
After our bug fix (cf. [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS)), variance is now much smaller (0.68% for our learning rule, vs. 0.17% for BPTT, 0.59% for truncated BPTT and 0.27% for spatial BP).
> In order to make an online task out of sCIFAR, the authors provide the label at every timestep. How is target encoded / scaled over time? What is the influence on performance of providing the target at each time step?
>
The one-hot encoded target is given at every time step and the total loss is the average of all instantaneous losses. We haven’t run any experiments in which targets are given more rarely.
> In lines 61-62 the authors write “Finally, in Section 4, we analyse our algorithm and relevant baselines [...] with sequence lengths up to over 1000 steps” – what experiment is this referring to and how are 1000 steps required?
>
The lengths of the sequences we consider are 1024 in sCIFAR, 2048 in ListOps, and 4096 in IMDB.
We hope that these clarifications together with the additional results triggered by other reviewers (cf. [global response](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS)) help you see our work in a more positive light. We are very happy to address any further questions that may arise during the discussion period.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thorough reply.
In acknowledgement of the added comparisons to other algorithms on the sCIFAR, IMDB and ListOps datasets and the overall quality of the work and presentation - but still wanting to acknowledge the, in my humble opinion, "only" moderate-to-high impact of the work - I have now revised my overall rating to a 6. | Summary: The authors introduce a forward-only approach to gradient computation in linear recurrent layers. The approach yields exact gradients for a single layer and approximate gradients for multilayer networks. They show that their approach yields more accurate gradients and results in more successful learning than alternative approximations that enable forward-only gradient computation in recurrent layers, and they illustrate how their method is particularly adapted to the structure of linear recurrent layers. They show that their approach yields test performance similar to that of backpropagation through time for a multilayer LRU-based model on two tasks in the long-range arena (LRA) benchmark.
Strengths: The context, relevant background, and main contribution are presented clearly. The contribution itself is technically insightful and leverages a view specific to linear, complex-valued recurrent units. The topic of efficient learning in RNNs is well-motivated and contributes to a growing literature on linear recurrent parameterizations for sequence models. The experiments, particularly the illustrations on the copy task, are well-documented and show an insightful framing of both the performance advantages and the anticipated challenges of the proposed method.
Weaknesses: - While the illustration of the method on the copy task is informative, the broader empirical evaluation is limited. Apart from the copy task results, the entire basis for the claim that the algorithmic contribution enables learning of long-range dependencies is a small set of results on a subset of the LRA benchmark. The authors do not disclose why only a subset of LRA is used. The results are difficult to contextualize as it’s not clear what values indicate successful learning of the long-range dependencies in the data. Other comparative evaluations, e.g. in terms of computational or memory costs, are absent. This contribution would be impactful even without state-of-the-art performance along any of these dimensions, but a lack of context is harder to overcome.
- The paper relies heavily on the LRU of Orvieto et al. (2023) and appears to contain some overlap in content with that work (e.g. Fig 1 left is nearly identical to Fig 1 left in Orvieto et al.). At some points, for example $\S$4.2, the evaluation and analysis appears to be at least as much about the LRU itself as the algorithmic contribution of this submission. The authors could improve this work by more clearly delineating the present contribution from that of Orvieto et al. (2023).
- The authors spend half of the discussion arguing for the potential importance of their work for neuroscience, which seems somewhat implausible and at odds with a relatively tight architectural and algorithmic focus up to that point (save for a brief aside on lines 170-171). The similarities in terms of locality, complex values, and modularity seem mostly superficial in light of the larger differences separating forward-only learning of LRUs from computational models of spiking neurons, to say nothing of actual, biological networks of neurons. If this analogy is important, it deserves more careful development in the main body of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors clarify the evidence for learning of long-range dependence and offer some additional context in line with the first point above? Do *any* of the results in Table 1 suggest that the model has learned long-range dependencies? Can the authors share their reasons for limiting evaluation to a subset of the LRA benchmark?
- What is the relevance of the two paragraphs spanning lines 270-287 with respect to the contribution of the present paper? They seem to be largely focused on an (important) detail for initialization of linear RNN weights. Could this finding be interpreted as evidence that the comparisons are performed suboptimally, and other initialization schemes should have been explored?
- Typically, “online” learning refers to contexts in which data continuously streams from a source and low-cost updates are made in real time, after which an observation is not revisited. Do the experiments here follow this setup, or are multiple forward passes over the data required? If the latter, what is the advantage in practice of this method over BPTT?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no direct potential for negative social impact. Potential limitations for the method and results are either discussed or covered above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive and useful criticism. We reply point by point below.
> The authors spend half of the discussion arguing for the potential importance of their work for neuroscience, which seems somewhat implausible and at odds with a relatively tight architectural and algorithmic focus up to that point (save for a brief aside on lines 170-171). The similarities in terms of locality, complex values, and modularity seem mostly superficial in light of the larger differences separating forward-only learning of LRUs from computational models of spiking neurons, to say nothing of actual, biological networks of neurons. If this analogy is important, it deserves more careful development in the main body of the paper.
While we find our results promising as a starting point for more biologically realistic models, we agree that the discussion leaned too much towards neuroscience. We will keep our neuroscience discussion to a single paragraph in the next version of the paper, which will read:
*"We conclude by noting that modularity, the overarching principle behind our approach, is at the very heart of the influential columnar hypothesis in neuroscience (Mountcastle, 1957). This hypothesis states that the architecture of the neocortex is modular, with the cortical column as an elementary (or canonical, cf. Douglas et al., 1989) building block one level of abstraction above neurons. We thus speculate that modularity could be a key neural network design principle discovered by evolution, that considerably simplifies the temporal credit assignment problem. This is in line with our finding that a modular architecture enables learning complicated temporal dependencies through simple local temporal credit assignment mechanisms, letting spatial backpropagation take care of assigning credit over the network hierarchy. Our findings provide a starting point for understanding how the brain deals with the fundamental problem of learning the temporal structure behind its sensory inputs."*
> What is the relevance of the two paragraphs spanning lines 270-287 with respect to the contribution of the present paper? They seem to be largely focused on an (important) detail for initialization of linear RNN weights. Could this finding be interpreted as evidence that the comparisons are performed suboptimally, and other initialization schemes should have been explored?
We also agree that too much space was allocated to this detail and we will remove it in the next version of our paper. Those considerations were a consequence of the initialization bug of the $D$ matrix we had in our code, cf. the [global answer](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS) for more details, and no longer hold.
> Can the authors clarify the evidence for learning of long-range dependence and offer some additional context in line with the first point above? Do any of the results in Table 1 suggest that the model has learned long-range dependencies?
We will discuss in the next version of the paper the required attention span analysis provided in the original LRA paper [24]. This analysis shows that transformer models need to attend to past inputs on the order of hundreds for the tasks we considered. Our online-learned models greatly outperform transformers, which suggests that long-range dependencies are being captured. While based only on these results we cannot entirely rule out that our models could be making better use of shorter contexts, it seems unlikely that this is the case for LRUs; we will nonetheless add a disclaimer for this point.
> Can the authors share their reasons for limiting evaluation to a subset of the LRA benchmark?
We now ran an additional set of experiments on the IMDB LRA benchmark, where we observe the same overall pattern of results, with our method coming closer to BPTT than to spatial backpropagation in terms of performance (cf. rebuttal results pdf). We did not run experiments on the other datasets of the LRA benchmark. The gap between linear RNNs and LRUs is close to 0 in the retrieval dataset (see Table 8 in [24]), so differences between online learning methods are likely to be small. For the pathfinder tasks, we could not get above chance level with BPTT and the online version of the loss, suggesting that further algorithmic/architectural developments are needed before these tasks can be learned online. We will add a short comment explaining this rationale to the next version of the paper.
> Typically, “online” learning refers to contexts in which data continuously streams from a source and low-cost updates are made in real time, after which an observation is not revisited. Do the experiments here follow this setup, or are multiple forward passes over the data required? If the latter, what is the advantage in practice of this method over BPTT?
We indeed allow for multiple passes over a finite training set, as is customarily done in the approximate RTRL literature. We opted for this standard setup to make it easier to gauge how close our method can now get to conventional, offline gradient-based learning via BPTT, which is traditionally far better than approximate RTRL. We will clarify this rationale in the next version of the paper.
We are fully available to answer any further questions you may have during the discussion period. We note that we ran additional experiments triggered by other reviews, which we collected in the global response ([link](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS)).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their engagement and discussion. I have read their replies above and the discussions elsewhere, and together these appear to have led to some fruitful improvements or clarifications. Overall I feel that my original score of 6 is reasonable for this work. | Rebuttal 1:
Rebuttal: We thank the reviewers for the useful comments and questions. The points raised in the reviews led us to run several new experiments and to clarify certain aspects of our manuscript. We believe that these changes have significantly improved our paper. We summarize below the major changes and reply individually to each reviewer in separate threads.
**Detailed discussion of SnAp-1 and e-prop.** Thanks to the reviewers we realized that we did not explain in sufficient detail the relationship between SnAp-1 [15], e-prop [13], and our learning rule. While both rules are definitely closely related to ours (and we will stress this accordingly), we will now explain the two innovations introduced in our paper: (1) the extension to the complex domain, and (2) our hybrid gradient computation strategy that leverages spatial backpropagation (together with forward sensitivity recursions) to assign credit over multiple recurrent layers. In particular, a discussion of (2) will be added to the next version of the paper. As we explain below, this innovation enables sending detailed spatial credit assignment information over networks with multiple layers.
For a network comprising a single layer of IRMs, SnAp-1, e-prop, our rule, and in fact exact RTRL, all become identical. However, the important multilayer case is handled differently. On the one hand, SnAp-$k$ uses the standard RTRL gradient decomposition $\nabla_\theta L=\sum_t \frac{\partial L_t}{\partial h_t}\frac{dh_t}{d\theta}$, carrying forward in time an approximation of $\frac{dh_t}{d\theta}$ that improves with increasing $k$. The partial derivative $\partial L_t / \partial h_t$ is identically zero for neurons that are not directly connected to the output loss; for hidden recurrent neurons $l$ layers away from the output to receive a learning signal, $k$ has to be at least $l+1$. Thus, for networks with deep recurrent neurons, SnAp-1 would only learn the last recurrent layer. Adding skip connections would only partially ameliorate this problem, as SnAp-1 would only broadcast a global error through that pathway alone. Such differences do not matter for the shallow networks the SnAp paper focuses on, but become important for us, as we move to deep networks. On the other hand, e-prop starts from the same decomposition as us, $\nabla_\theta L = \sum_t \frac{dL_t}{d h_t}\frac{dh_t}{d\theta}$, also noting that $\frac{d L_t}{dh_t}$ cannot be computed causally. The e-prop rule is then derived by approximating $\frac{d L_t}{dh_t}$ by $\frac{\partial L_t}{\partial h_t}$, which then leads to the same situation as in SnAp-1. We, instead, opted for approximating $\frac{d L_t}{dh_t}$ using spatial backpropagation, enabling credit assignment over multiple IRM layers, and therefore sending detailed errors to models with hidden, deep recurrent neurons. We will add this discussion to the next version of our paper.
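As an illustration of why the forward half of this scheme is cheap for independent recurrent modules, the toy sketch below (our own simplified example, not the paper's implementation; all sizes and names are illustrative) propagates the elementwise sensitivity $s_t = dh_t/d\lambda$ for a diagonal complex linear recurrence and pairs it with a placeholder spatial error at each step. For this holomorphic map the complex chain rule mirrors the real one; a full treatment of a real-valued loss would use Wirtinger calculus and the appropriate conjugations.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5, 4   # sequence length and units per layer (toy sizes)

# One layer of independent recurrent modules: h_t = lam * h_{t-1} + B @ x_t,
# with an elementwise (diagonal) complex recurrence lam. Because the
# recurrence is diagonal, the RTRL sensitivity dh_t/dlam collapses from an
# n x n matrix to one number per unit, giving a cheap elementwise recursion.
lam = 0.9 * np.exp(1j * rng.uniform(0.0, 0.5, n))
B = rng.normal(size=(n, 2)) + 0j
xs = rng.normal(size=(T, 2))

h = np.zeros(n, dtype=complex)
s = np.zeros(n, dtype=complex)       # forward sensitivity s_t = dh_t/dlam
grad = np.zeros(n, dtype=complex)
for t in range(T):
    s = lam * s + h                  # exact sensitivity recursion for IRMs
    h = lam * h + B @ xs[t]
    # delta stands in for the spatially backpropagated error dL_t/dh_t,
    # which the hybrid rule obtains from the layers above at each step.
    delta = rng.normal(size=n)
    grad += delta * s                # online accumulation of the gradient
                                     # contribution (up to conjugation
                                     # conventions for real-valued losses)
```

The sensitivity update costs the same as one forward step, which is why exact within-layer gradients come at roughly twice the cost of inference.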
**Linear RNN results on sequential CIFAR**. We ran a new set of experiments on sCIFAR, replacing LRUs by dense linear RNN layers, cf. pdf. We find that online-learned complex diagonal networks greatly outperform their dense counterparts.
**Addition of GRU baselines.** We further investigated parameter-matched (wide) single-layer GRU networks on the copy task, essentially the architecture used in the SnAp paper [15], as well as multilayer GRU networks, cf. pdf. We ran these experiments to investigate the impact of depth on approximate RTRL in another popular, powerful architecture studied in previous work [9,15]. To train such networks online, we used SnAp-1 for single-layer GRUs, and our hybrid combination of spatial backpropagation with forward sensitivities for multilayer GRUs, cf. point on SnAp-1/e-prop. We find that neither shallow nor multilayer GRUs match the performance of LRU networks trained with our online learning rule. Moreover, only BPTT can take advantage of multiple GRU layers, whereas online GRU learning stagnates at single-layer performance. This again confirms the importance of the IRM design motif in enabling accurate online gradient estimation. We are currently running similar experiments on the sCIFAR benchmark and will add the results to the next version of the paper.
**New IMDB LRA benchmark.** We ran one more benchmark from the LRA suite, IMDB, to confirm that our trend of results remains (cf. pdf).
**Overall improvement in results.** We noticed some bugs in our code and brought some additional improvements after the submission. We here summarize the changes we made and the impact they have on our results:
- We were initializing the $D$ matrix without proper normalization; we now normalize it. We observed that, thanks to this change, the explosion phenomenon in linear RNNs no longer appears.
- Beforehand, we ignored the normalization factor $\gamma$ in the backward pass. We fixed this.
- Previously, for all experiments in the LRA benchmark, the prediction for time step t was created by taking the softmax of the current encoding. We now change it to be the softmax of the cumulative mean of all previous encodings, and block gradients for all previous time steps to ensure that our learning rule remains causal. This way, we get closer to the non-causal mean-pooling usually used.
These changes led to an overall improvement in all methods, particularly the online & LRU ones, cf. pdf.
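As an illustration of the causal readout change described above, here is a minimal forward-pass sketch (pure Python; function names are ours, and in the actual training code gradients through previous encodings would be blocked with a stop-gradient operation inside an autodiff framework):

```python
import math

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def causal_predictions(encodings):
    """Prediction at step t = softmax of the cumulative mean of encodings[0..t].

    Only information from past and current steps is used, so the readout
    stays causal while approaching the non-causal mean-pooling readout.
    """
    preds = []
    mean = [0.0] * len(encodings[0])
    for t, e in enumerate(encodings, start=1):
        # incremental running mean: m_t = m_{t-1} + (e_t - m_{t-1}) / t
        mean = [m + (x - m) / t for m, x in zip(mean, e)]
        preds.append(softmax(mean))
    return preds
```

At the final step this readout coincides with mean-pooling over the whole sequence, which is why it closes most of the gap to the usual non-causal pooling.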
Interestingly, we additionally observed that it is now possible to reach a 0 training loss on the copy task when training a 4-layer deep LRU network with our learning rule for 250 epochs. This goes against the result that uniformly-biased gradient descent doesn't converge to a global optimum (D’Aspremont, SIAM Journal on Optimization, 2008) and may suggest that the bias of our learning rule has some special structure.
We also retuned hyperparameters (learning rate and weight decay) by performing separate grid searches for every method; details will be shared in the next version of the supplement.
Pdf: /pdf/084ee8b5a2a4ab2b27b335188b4b79e21e5b0cbf.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces an online learning algorithm for recurrent neural networks, particularly targeting the learning of long-range dependencies. It builds upon linear recurrent units and independent recurrent modules in multi-layer networks with complex-valued neural activities. The online update approach, originally proposed in the SnAp-1 algorithm for real values, is here extended to handle complex-valued neural activities, which reduces the memory and computational requirements of training. As a result, this approach outperforms both spatial (online) backpropagation and prior approximate real-time recurrent learning approaches on the copy task benchmark with sequences of length 50, while offering a marginal improvement over spatial backpropagation on the long-range arena benchmarks.
Strengths: The proposed online learning algorithm effectively optimizes both single-layer and multi-layer recurrent neural network architectures. By transitioning from real-valued to complex-valued neural activities and employing a diagonal recurrent connectivity matrix, it maintains good performance on long-range temporal tasks and enhances online gradient estimation. It accomplishes exact online gradient computation within a single layer using only double the resources required for inference. In managing multi-layer linear recurrent units, the proposed approach adeptly approximates backpropagated errors and augments hidden states, mitigating the memory-scaling issue inherent in real-time recurrent learning. Furthermore, it can be thought of as a refinement of the SnAp-1 algorithm that can handle independent recurrent units online, offering robust theoretical guarantees despite the approximation used, and accurately computes gradients for all layers, thereby improving gradient alignment and boosting overall performance.
Weaknesses: * The proposed algorithm, while innovative, has some limitations acknowledged by the authors. Its approximation of the error variable, δ, can degrade over time, particularly when the neural activity values converge around 1, leading to a disregard of future error information. This approximation error is compounded when backpropagated through multiple layers, resulting in only partial error signal backpropagation. Hence, the algorithm's effectiveness can be reduced, particularly when managing complex dynamics across many layers.
* The novelty of the algorithm is also somewhat constrained, as it builds upon existing concepts of Linear Recurrent Units (LRUs) and online Recurrent Neural Network (RNN) training methodologies, adapting the mechanisms of the SnAp-1 algorithm to handle complex-valued entities in independent recurrent units.
* The proposed algorithm's performance, while impressive on tasks like the copy task, still lags significantly behind full backpropagation-through-time (BPTT) on longer sequence tasks such as the long-range arena benchmarks with a sequence length of 1000.
* Finally, the methodology employed for hyperparameter selection — using the hyperparameters from BPTT as a basis — isn't practically applicable in an online learning scenario where BPTT won't be available.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * How would the hyperparameters be optimized for the online learning setting where BPTT results are not available?
* There seems to be a significant drop in accuracy in LRA benchmarks compared to the BPTT. Does this mean that the proposed approach doesn’t scale well to larger sequences? If so, it would be good to discuss the typical sequence lengths that this approach would be ideal for.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: limitations of the works has been discussed briefly, but not the potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful questions and comments. We reply below to each of them.
> The novelty of the algorithm is also somewhat constrained, as it builds upon existing concepts of Linear Recurrent Units (LRUs) and online Recurrent Neural Network (RNN) training methodologies, adapting the mechanisms of the SnAp-1 algorithm to handle complex-valued entities in independent recurrent units.
Thank you for raising this point, which made us realize that we were not precise enough in our comparison with SnAp-1. We will update our manuscript with a detailed discussion, cf. global response [link](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS). Besides being able to learn complex neurons, our rule handles multilayer recurrent neural networks (RNNs) differently, something that we failed to point out previously. More concretely, SnAp-1 does not send detailed spatial credit assignment information in deep RNN models such as the LRU. At best it only broadcasts a global error backwards through skip connections, and in the worst case for networks without skip connections it leads to identically zero updates to deep hidden recurrent neurons (that do not directly influence the loss). Getting a learning signal for such neurons requires raising $k$ in SnAp-$k$, which comes at additional computational and memory costs. In contrast to SnAp, by combining forward sensitivity propagation with spatially backpropagated errors, our hybrid learning rule delivers detailed learning signals even to hidden recurrent neurons, while retaining the cost of a single inference pass.
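To make the forward-sensitivity ingredient of such a rule concrete, here is a toy sketch of our own (not the paper's full rule, which additionally combines these sensitivities with spatially backpropagated errors) for a single layer of independent diagonal, possibly complex-valued, linear recurrent units, assuming the simplified recurrence $h_t = \lambda h_{t-1} + x_t$:

```python
def run_diagonal_units(lams, xs):
    """Carry the exact sensitivity s = dh/dlam alongside each unit's state.

    Because the recurrence is diagonal, real-time recurrent learning is
    exact here at roughly twice the cost and memory of inference:
        s_t = h_{t-1} + lam * s_{t-1}
        h_t = lam * h_{t-1} + x_t
    """
    h = [0j] * len(lams)
    s = [0j] * len(lams)
    for x in xs:
        # update sensitivities first, using the previous state h_{t-1}
        s = [hi + lam * si for hi, lam, si in zip(h, lams, s)]
        h = [lam * hi + x for lam, hi in zip(lams, h)]
    return h, s
```

An unbiased gradient for each $\lambda$ is then available at every time step by multiplying the local error at that step with the corresponding sensitivity.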
> The proposed algorithm's performance, while impressive on tasks like the copy task, still lags significantly behind full backpropagation-through-time (BPTT) on longer sequence tasks such as the long-range arena benchmarks with a sequence length of 1000. [...] There seems to be a significant drop in accuracy in LRA benchmarks compared to the BPTT. Does this mean that the proposed approach doesn’t scale well to larger sequences? If so, it would be good to discuss the typical sequence lengths that this approach would be ideal for.
We reran our experiments after fixing a number of bugs (detailed in the global response, [link](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS)) and retuned hyperparameters for every method separately, no longer perturbing around the hyperparameters that were optimal for BPTT. This widened the gap between our algorithm and spatial backpropagation, in particular on the long-range arena benchmarks, cf. rebuttal results pdf. We note that these results are quite strong in absolute terms, greatly outperforming transformer models, which in turn already require attention spans on the order of hundreds of tokens for these tasks (cf. [24]).
Moreover, to give a better sense of the significance of our results, we performed new sequential CIFAR experiments now using dense stacked linear recurrent networks (the fully-connected counterpart of the diagonal LRU) trained online. These dense models are significantly outperformed by our online-learned diagonal LRU networks.
> Finally, the methodology employed for hyperparameter selection — using the hyperparameters from BPTT as a basis — isn't practically applicable in an online learning scenario where BPTT won't be available. [...] How would the hyperparameters be optimized for the online learning setting where BPTT results are not available?
While we fully agree with the reviewer that ultimately this question will need to be answered, we feel that it is too difficult a challenge on its own, and beyond the scope of the present study. We will add a remark to the experimental section of our paper emphasizing that our results should read as an upper bound obtained using hindsight knowledge of the best hyperparameters, as is customarily done in the approximate RTRL literature. We will further note that complementary techniques developed in the continual and broader online learning literature may be used to help tune hyperparameters in an online, adaptive fashion.
We are pleased to answer any further questions that you may have during the ensuing discussion period. We note that we ran additional experiments triggered by other reviews, which we collected in the global response ([here](https://openreview.net/forum?id=Wa1GGPqjUn&noteId=HB3E5CrCGS)). | null | null | null | null | null | null |
Necessary and Sufficient Conditions for Optimal Decision Trees using Dynamic Programming | Accept (poster) | Summary: The authors present a dynamic programming (DP) method for constructing optimal decision trees.
The method is more general than previous DP methods. The authors report extensive experiments
to compare with previous methods, including MIP and DP methods.
Strengths: The paper is well-written and there is extensive supplementary material including code.
There appears to be a real contribution in the integration of diverse types of constraints
within the DP framework. The DP method is conclusively shown to be much faster than MIP methods.
There is a welcome theoretical analysis of the problem which allows the authors to state weaker
sufficient conditions (than were previously known) for the DP method to work.
Weaknesses: My main (and only major) criticism of this paper is that the authors try to prove that their list
of conditions are not only sufficient but also necessary for the DP method to work.
I believe that these conditions are sufficient, but I have some doubts about necessity.
Firstly, in lines 193-194 you give a definition of a Markovian cost function:
it is a cost function that depends just on the state and the current branching decision.
In Appendix A (lines 46-47) you give the impression that a Markovian cost function can depend on
the 'history' of branching decisions, which seems to go against standard definitions (including your own)
of Markovian. Would the DP method still work if the cost depended, for example, on the order of the
branching decisions in the history? If so, then the claim that having a Markovian cost function is
a necessary condition for the DP method to work appears false.
Secondly, you give a definition of anti-monotonicity (Def. 4.5) which appears non-standard.
I would expect c to be anti-monotonic iff (c(Y) and X a part of Y implies c(X)) but you say
c is anti-monotonic iff (opt(Y) and X a part of Y implies c(X)). You could call this, for example,
anti-monotonicity on optimal solutions. A proof of necessity can only work with this weaker version
but Appendix A (lines 114-116) seems to be saying that the stronger version is necessary (which is wrong).
One obvious fix would be to weaken your claims by dropping the 'only if' parts of Prop. 4.3
and Theorem 4.6. This would leave a very solid paper. The other solution is to try and produce
a watertight proof of the 'only if' directions in the rebuttal.
There are also a few minor improvements that can be made in terms of presentation:
- In last sentence of the abstract you should say something about the quality of the trees you find
(otherwise being faster is a vacuous piece of information).
- lines 166-170: This paragraph is confusing. One possible interpretation is that constraints
are never taken into account, which is surely not the case. It is not clear what 'building blocks' refers to.
- Definition 4.4 (line 209): you should avoid phrases such as 'and thus' in a definition.
This is a consequence of the definition not part of it. This could be stated after the definition.
- Definition 4.5: again the 'and thus v_1 \notin opt(\Theta_1,s_1)' is out of place in a definition.
It is implicit that Theta/Theta_i are the sets of feasible solutions for s/s_i. It would be better to recall
this within the definition.
- line 218: give an example of a non-anti-monotonic constraint.
- line 229: why 'street' instead of 'streed'?
- equation (10): 'else' should be 'otherwise'
- line 285: do you mean monotonic or anti-monotonic?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: (1) Is the 'only if' part of Prop. 4.3 correct with the standard definition of 'Markovian'?
(2) Is there a typo in lines 114-116 of Appendix A?
(3) When you say in line 266 'This Pareto front can then be used to find the decision tree with, e.g.,
the best F1-score', is the Pareto front of possibly exponential size?
(4) In lines 268-282 (Group fairness): Are the actual constraints you apply
stronger than demographic parity. Are you imposing that you have parity down each branch, which would be much stronger?
****ADDED AFTER REBUTTAL****
I appreciated the authors' rebuttal and replies to my comments. I am generally positive about this paper, but it is important that the authors make perfectly clear to the reader what the necessary and sufficient conditions are. I found the terms Markovian and anti-monotonicity confusing and the authors should consider using history-dependent Markovian and anti-monotonicity on optimal solutions, for example.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: There are no obvious negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive words about our work! In response to your questions and comments:
**Markovian**
1. "In Appendix A (lines 46-47) you give the impression that a Markovian cost function can depend on the 'history' of branching decisions, which seems to go against standard definitions (including your own) of Markovian"
The parents’ branching decisions can be included in the state, as explained in line 163-164. This is a common trick applied in MDPs. Such an MDP is called a _history dependent_ Markov Decision Process, which is equivalent to a normal Markov Decision Process (see, for example, Section 1.4.1 in Sigaud and Buffet, _Markov decision processes in artificial intelligence_, John Wiley & Sons, 2013).
2. "Would the DP method still work if the cost depended, for example, on the order of the branching decisions in the history? If so, then the claim that having a Markovian cost function is a necessary condition for the DP method to work appears false"
Yes, the DP method would still work. The order of the parents’ branching decisions could be recorded in the state. Given our answer at point 1, this does not violate our claim that a Markovian cost function is a necessary condition.
3. (Q1) "Is the 'only if' part of Prop. 4.3 correct with the standard definition of 'Markovian'?"
Points 1 and 2 show that the only if part of Prop 4.3 is correct and in accordance with the standard definition of Markovian.
**Anti-monotonicity**
4. "The definition of anti-monotonicity is non-standard."
We indeed only require anti-monotonicity on optimal solutions, in order to have a necessary condition. Anti-monotonicity on all solutions is sufficient, but not necessary. We will update the text to make this more clear.
5. "A proof of necessity can only work with this weaker version but Appendix A (lines 114-116) seems to be saying that the stronger version is necessary (which is wrong)."
Appendix A (lines 114-116) uses the weaker version, and not the stronger version. There is no typo in these lines (Q2), but we understand the confusion of our notation. We will update the text to make this more clear by replacing $\theta$ with $opt(\Theta)$ and $\theta_1$ with $opt(\Theta_1)$ in lines 114-116 in the appendix.
6. "line 285: do you mean monotonic or anti-monotonic?"
This should indeed be anti-monotonic. Thanks!
**Pareto front**
7. (Q3) "When you say in line 266 'This Pareto front can then be used to find the decision tree with, e.g., the best F1-score', is the Pareto front of possibly exponential size?"
Let $N_0$ and $N_1$ be the number of negative and positive instances in a dataset. The maximum size of the Pareto-front is $M = \min(N_0, N_1)$. This Pareto-front could for example have the values $\\{ (M-x, x) ~ | ~ x \in \\{0, 1, ..., M\\} \\}$.
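As an illustration of how such a Pareto front could be post-processed for F1, here is a hypothetical sketch (function and variable names are ours, not the authors'), picking the best tree from a front of (false-negative, false-positive) pairs given the number of positive instances:

```python
def best_f1_from_front(front, n_pos):
    """Pick the point on a (false_neg, false_pos) Pareto front that
    maximizes F1, given n_pos positive instances in the dataset."""
    best = None
    for fn, fp in front:
        tp = n_pos - fn          # true positives implied by the false negatives
        if tp == 0:
            continue             # F1 undefined/zero without true positives
        prec = tp / (tp + fp)
        rec = tp / n_pos
        f1 = 2 * prec * rec / (prec + rec)
        if best is None or f1 > best[0]:
            best = (f1, fn, fp)
    return best
```

Since the front has at most $\min(N_0, N_1) + 1$ points, this scan is cheap relative to the tree search itself.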
**Demographic Parity**
8. (Q4) "In lines 268-282 (Group fairness): Are the actual constraints you apply stronger than demographic parity. Are you imposing that you have parity down each branch, which would be much stronger?"
We enforce the demographic parity on the whole tree only, and not on each branch individually.
Enforcing demographic parity on every branch would be an example of a non-anti-monotonic constraint: a subtree that exceeds a discrimination limit could be balanced out by another subtree with an opposite bias, yielding a combined tree that does not violate the demographic parity constraint.
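The balancing argument can be illustrated numerically with a toy sketch of our own (not the paper's formulation), where each subtree alone exceeds a discrimination limit of 0.1 but the combined tree satisfies it exactly:

```python
def parity_gap(a_pos, a_n, b_pos, b_n):
    """Demographic parity gap |P(pred=1 | group A) - P(pred=1 | group B)|,
    from counts of positive predictions per group."""
    return abs(a_pos / a_n - b_pos / b_n)

# Left subtree: group A gets 8/10 positive predictions, group B only 2/10.
left = parity_gap(8, 10, 2, 10)          # gap 0.6, violates a 0.1 limit
# Right subtree has the opposite bias.
right = parity_gap(2, 10, 8, 10)         # gap 0.6, also violates the limit
# The combined tree balances out: both groups get 10/20 positive predictions.
whole = parity_gap(8 + 2, 20, 2 + 8, 20)  # gap 0.0, satisfies the limit
```

This is exactly why per-branch demographic parity would be non-anti-monotonic: a feasible whole tree can be composed of infeasible subtrees.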
**Pronunciation**
9. "Why 'street' instead of 'streed'?"
This is a play on words, since "t" and "d" are often pronounced the same (incorrectly).
**Other suggestions**
We also thank the reviewer for the other suggestions for improving the text. We will include the suggestions in our final version.
---
Rebuttal Comment 1.1:
Title: Reply to the authors' rebuttal
Comment: Dear authors,
Thank you for your detailed rebuttal.
The rebuttal makes sense and all my questions have been answered.
However, I am a little concerned that you have to make a change to an important claim in the paper (anti-monotonicity on optimal solutions not on all solutions) which it will not be possible to verify by another review.
---
Reply to Comment 1.1.1:
Comment: The changes we will make to our final text concerning anti-monotonicity will not change any important claim in our paper. These changes are only small changes to improve clarity and avoid misunderstanding, namely:
1. Addition of one sentence in the main text where we highlight that our definition of anti-monotonicity only requires anti-monotonicity of optimal solutions and that requiring all solutions to satisfy anti-monotonicity, as Nijssen and Fromont (2010) require, is not necessary.
2. We will update the notation in Appendix line 114-116 as specified to avoid misunderstanding.
Our definitions, theorems and proofs in our first version already used anti-monotonicity on optimal solutions only. Therefore, none of these changes affect our theorems, proofs and definitions. The changes also do not affect our experimental results. | Summary: The paper introduces STreeD, a novel dynamic programming (DP) framework designed for learning optimal decision trees. By expanding the range of solvable objectives and constraints, STreeD offers significant advancements in decision tree optimization. The authors also offer theoretical insights to aid in determining the solvability of specific optimal decision tree problems using DP. The effectiveness of STreeD is showcased through its successful application in various tasks, such as revenue maximization under capacity constraints, group fairness, and optimization for nonlinear classification metrics.
Strengths: The contributions of the paper include a new cost function that allows for more flexible optimization, a new framework for learning decision trees that can handle a wider range of objectives and constraints, and empirical results demonstrating the effectiveness of the approach on several tasks. Overall, this paper is well organized and easy to follow. Additionally, the claims made in the paper are well-supported by comprehensive experimental evaluations.
Weaknesses: - Regarding section 4.4, would it be possible for you to provide an example that illustrates a situation where the optimization task is non-separable?
- Regarding Figure 2, could you kindly provide a description of the relationship between the Remaining Gap at Time-out and the number of Trees Computed? The current plot exhibits an unusual pattern where the gap appears to increase as the number of computed trees increases, which appears unexpected.
- Concerning the final cost of a tree as expressed in Equation (4), could you please provide a precise explanation of the meaning of the term g(s, b(u))?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations of this work are thoroughly discussed in the conclusion section. However, it is important to acknowledge that the current framework is limited to parallel splits only. In future investigations, exploring other types of splits, such as oblique splits, could be considered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive words about our work! In response to your questions:
1. "Regarding section 4.4, would it be possible for you to provide an example that illustrates a situation where the optimization task is non-separable?"
An example of a problem for which we expect no efficient separable optimization task can be formulated is the optimization of decision tree policies for Markov decision processes, for which Vos and Verwer present a MIP formulation (Vos & Verwer, arXiv:2301.13185, 2023). In this problem, the optimal policy in a subtree depends on the frequency of each Markov state being reached. However, the frequency of each state being reached is dependent on the policy, which depends on all decision variables in the rest of the tree. This makes it hard to see how an optimal solution for the current subtree can be found independently from the rest of the tree.
We will mention this example in the final version of our work.
2. "Regarding Figure 2, could you kindly provide a description of the relationship between the Remaining Gap at Time-out and the number of Trees Computed?"
The plot is a cumulative distribution plot showing on the vertical axis the percentage of problem instances that were a) solved within the given runtime (left side of the plot), or b) solved up to a certain MIP gap (right side of the plot).
For example in Fig. 2b, our method finds the optimal tree for all problem instances within 20 seconds. The state-of-the-art MIP method Jo-PPG-MIP (Jo et al., 2021), finds the optimal solution for 50% of the problem instances within the time-out of 300 seconds. This means the MIP gap is 0% for 50% of the problem instances (middle of the plot). At timeout, 80% of the instances are solved with a MIP gap below 50% (right side of the plot). This means that at timeout 20% of the instances still had a remaining MIP gap higher than 50%: far from being solved.
We will add this clarification in the paper.
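In code, the two sides of such a plot could be computed along these lines (an illustrative sketch with our own names, not the authors' plotting code):

```python
def frac_solved_within(runtimes, timeout):
    """Left side of the plot: fraction of instances solved within `timeout`.
    An instance that timed out is recorded as None."""
    return sum(t is not None and t <= timeout for t in runtimes) / len(runtimes)

def frac_gap_at_most(gaps, threshold):
    """Right side of the plot: fraction of instances whose remaining MIP gap
    at timeout is at most `threshold` (solved instances have gap 0.0)."""
    return sum(g <= threshold for g in gaps) / len(gaps)
```

Sweeping `timeout` over the time axis and `threshold` over the gap axis then traces the two monotonically increasing curves of the figure.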
3. "Concerning the final cost of a tree as expressed in Equation (4), could you please provide a precise explanation of the meaning of the term g(s, b(u))?"
$b(u)$ is the branching feature of node $u$. $g(s, b(u))$ is the cost of branching on feature $b(u)$ in state $s$. | Summary: The paper proposes a novel framework for constructing Dynamic Programming (DP) algorithms for learning decision trees on Separable objectives. Historically, DP methods are among the fastest methods that build optimal decision trees, and generalization to a wide class of objectives is a useful contribution.
The theoretical contribution includes a rigorous definition of a Separable objective, and a main theorem that defines multiple properties that must be satisfied for the objective to be separable. A mathematical formulation of the DP recursive algorithm is given that is guaranteed to work for any separable objective.
Detailed evaluation is done on multiple datasets and objectives, some of which were previously considered in the literature, and other problems settings are novel and suitable only to the proposed framework.
Strengths: - This is a strong generalization of existing DP frameworks for Optimal Decision Trees solving.
- According to the experiments, this paper may set a new state of the art in terms of being the most efficient and flexible DP solver for Optimal Decision Trees.
Weaknesses: - No major or minor weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: -
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: It would be nice to add examples of non-separable objectives to outline the scope of applicability of the proposed framework; to quantify how frequently such objectives are encountered in real-world tasks and datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive words about our work!
An example of a problem for which we expect no efficient separable optimization task can be formulated is the optimization of decision tree policies for Markov decision processes, for which Vos and Verwer present a MIP formulation (Vos & Verwer, arXiv:2301.13185, 2023). In this problem, the optimal policy in a subtree depends on the frequency of each Markov state being reached. However, the frequency of each state being reached is dependent on the policy, which depends on all decision variables in the rest of the tree. This makes it hard to see how an optimal solution for the current subtree can be found independently from the rest of the tree.
We will mention this example in the final version of our work.
---
Rebuttal Comment 1.1:
Comment: I would like to thank Authors for their response and thorough examples in the rebuttals that showcase the generality and applicability of the proposed framework, which confirms my original evaluation score. | Summary: The paper investigates the conditions under which an optimal binary decision tree problem can be formulated as a dynamic programming (DP) problem, proposing the so-called *STreeD* framework. More specifically, the text establishes a general concept of separability for the objectives and constraints of DP-representable optimization problems. This concept requires state transitions to be order-preserving (for optimality) and anti-monotonic (for feasibility). The authors demonstrate that these properties hold for non-trivial decision tree constraints and evaluate their approach across four application domains, wherein it outperforms general-purpose solvers.
Strengths: + Very well written and rigorously formalized.
+ Framework expands the class of optimization models that can be addressed via DP in an intuitive way.
The paper contributes to the recent and growing body of work on optimization models for training decision trees. I appreciated that the paper is nicely written and formalized, proposing a more intuitive framework to verify if a binary decision tree can be more compactly written as a DP. The numerical results are also well designed and inform model choice between DP and more traditional mathematical programming methods, which is a valuable contribution. Finally, another interesting aspect is that the application to non-binary trees also seems theoretically feasible to me, as it would require a discrete action set as opposed to a binary one.
Weaknesses: - The originality of the generalization is unclear.
- More details needed to justify the benchmarks in the numerical section.
My major concern is that I struggle to understand the novelty of the work and its relationship to more fundamental DP theory. I believe this is an issue of presentation and framing of the work, which attempts to be somewhat broad in Section 4.
More precisely, any discrete optimization problem can conceptually be represented as a DP model given sufficient information to encode within a state. The effectiveness of the DP is directly correlated with the space complexity of the resulting state space, since models are typically solved via value enumeration or state recursion. The paper's main contribution is to show that one can leverage a more compact state representation (the dataset-depth pair) for many classes of objectives/constraints, i.e., that no additional state variables are required to enforce constraints or optimality.
However, these are fundamental questions of DP representability, and my understanding is that the paper could possibly be reinterpreting existing classical results in the area. For example, the notion of order-preserving and anti-monotonicity seems to be quite close to the concepts of monotonicity and $\thicksim$-congruence of the seminal work by Karp & Held,
Karp, Richard M., and Michael Held. "Finite-state processes and dynamic programming." SIAM Journal on Applied Mathematics 15.3 (1967): 693-718.
and references therein. The multiobjective-representable DP concept is also classical, e.g.,
Li, Duan, and Yacov Y. Haimes. "Extension of dynamic programming to nonseparable dynamic optimization problems." Computers & Mathematics with Applications 21.11-12 (1991): 51-56.
and the idea of the *merge* also shares some similarities with the concatenation concept by
Elmaghraby, Salah E. "The concept of “state” in discrete dynamic programming." Journal of Mathematical Analysis and Applications 29.3 (1970): 523-557.
Other related works include:
- Smith, Douglas R. Representation of discrete optimization problems by discrete dynamic programs. Naval Postgraduate School, 1980.
- Pollock, Stephen M., and Robert L. Smith. A formalism for dynamic programming. 1985.
Many of the conditions discussed in the works above establish when the optimal policy will be optimal and feasible, in that any state is actually capturing all the necessary information of all paths that reach it; that is, the "merge" deriving from the state transitions is sound.
My understanding is that this paper could possibly be offering a more intuitive way of checking whether these conditions hold for the special case of binary decision trees (in contrast to building more complex automata, for example). However, the paper lacks this discussion, which I believe is important because I am not sure if order-preserving is novel, or whether it can be derived from the classical body of work above.
*Other notes*
- It is not clear from the text if the benchmark methods correspond to the state of the art; e.g., there are nonlinear formulations (such as CP) that could also be used for training.
- The paper is very well written, but in my opinion, the term "pushing the limits of DP" is not appropriate because the paper does not exhaust all possible DP-based methodologies for these problems. I would suggest something along the lines of "Dynamic Programming Representability of Optimal Decision Trees" to highlight the theoretical contributions of the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: It would be great if authors could comment on the relationship between Proposition 4.3 and Theorem 4.6, and existing works mentioned above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No limitations were discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive words about our work! In response to your questions:
**Novelty**
1. "My major concern is that I struggle to understand the novelty of the work and its relationship to more fundamental DP theory."
We provide a tailored DP theory for decision trees. This has several benefits.
- We draw a deep connection between decision trees and DP. It seems that the community is not aware of this connection: papers on specific problems that naturally fit within our framework are published at premier venues, e.g., nonlinear metrics (Demirović & Stuckey, AAAI-21), group fairness (Van der Linden et al., NeurIPS-22), and regression (Zhang et al., AAAI-23).
- Given our specific setting, we show how order preservation is a necessary condition to satisfy the classic DP principle of optimality (Bellman, 1957). We also show that additivity, as required in previous general DP methods for optimal decision trees (Nijssen and Fromont, DM&KD-10; Lin et al., ICML-20), is not necessary.
- As a consequence, this also allows us to exploit tailored algorithms for decision trees, namely a specialized algorithm for computing trees of depth-two, caching subtrees, and bounds. This third point is a minor point conceptually (we include it in the Appendix) but in practice it gives significant benefits.
- Lastly, we also provide code for our framework, which further supports the adoption of our general DP approach in the community.
In regards to specific questions about general DP theory in relation to the papers mentioned:
- The _monotonicity_ condition from Karp et al. indeed has similarities with our _order-preservation_ condition. Both our work and Karp et al. point out the similarity with the principle of optimality, as stated by Bellman (1957). However, a difference is that Karp et al.’s definition assumes costs to be real, completely ordered, and additive, whereas we prove that in the context of finding optimal decision trees these assumptions are not needed.
- The _right congruence_ notion discussed in Karp et al. relates to identifying equivalent states, which in DP relates to identifying equivalent subproblems, but is orthogonal to anti-monotonicity and order preservation. Similar to previous DP approaches, our method caches and reuses cached solutions for equivalent subproblems.
- The _multi-objective_ notion from Li et al. has similarities with our proposition (A.4) about combining separable optimization tasks into a new separable optimization task. However, Li et al. also restrict their analysis to solutions in $\mathbb{R}^n$. The solution value for each objective is still assumed to be real, completely ordered, and additive, which limits their theory to element-wise additive optimization tasks. Our theory does not share these limitations, which we hope to exploit in future work.
- Our _merge_ operation has similarities with _output concatenation_ by Elmaghraby. However, Elmaghraby assumes that the output space (solution value space) is completely ordered, whereas our work does not make this assumption.
- Smith (1980) and Pollock and Smith (1985) both also assume a real-valued, completely ordered solution value space.
To strengthen our theoretical contribution we will mention these similarities and extensions on existing theory in our final version.
2. "My understanding is that this paper could possibly be offering a more intuitive way of checking whether these conditions hold for the special case of binary decision trees."
Yes, we agree: one of the contributions of our work is indeed to make DP more accessible for optimizing decision trees. It also corrects previous work that limited DP for optimal decision trees to only additive optimization tasks (Nijssen and Fromont, DM&KD-10; Lin et al., ICML-20).
**Paper title**
3. "The paper is very well written, but in my opinion, the term "pushing the limits of DP" is not appropriate because the paper does not exhaust all possible DP-based methodologies for these problems. I would suggest something along the lines of "Dynamic Programming Representability of Optimal Decision Trees" to highlight the theoretical contributions of the paper"
We will adopt the suggested title to highlight, as suggested, the theoretical contributions of the paper.
**Benchmarks**
4. "More details needed to justify the benchmarks in the numerical section"
We are surprised by this comment since we invested a considerable effort in the experimental section. We evaluated our approach on four diverse applications, surveyed related work for each application and compared to the state-of-the-art methods for each application (including heuristic and optimal methods) on many datasets. We took extra care to make sure the experiments are detailed and have a clear outcome. Note that the main points are summarized in the paper, but more details are provided in the Appendix (we will emphasize this in the paper). We would be happy to expand the experimental part should the reviewer have further suggestions.
**CP and other methods**
5. "It is not clear from the text if the benchmark methods correspond to the state of the art; e.g., there are nonlinear formulations (such as CP) that could also be used for training"
We surveyed the literature and compared to the state-of-the-art optimal methods for the application domains considered, except for cost-sensitive classification, where for lack of an open source optimal method, we compare with a state-of-the-art method without optimality guarantees. We can clarify this in the text.
As for CP specifically, as far as we know, CP formulations for optimal decision trees have only been proposed for maximizing accuracy and not for the optimization tasks considered in our work. For maximizing accuracy, Aglin et al. (AAAI-20) already showed that the DP approach scales better than the state-of-the-art CP solution.
---
Rebuttal Comment 1.1:
Comment: Thank you for carefully answering my questions and outlining the connection with previous works - I believe this helps in broadening the impact of the methodology. I also appreciate the clarification of the numerical results. My concern here is why those specific four benchmarks were chosen (not sure what was the measure of diversity).
I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your comments; we agree that adding the discussion to the paper will indeed deepen it.
Regarding the benchmark selection: our aim was to select a diverse set of benchmarks such that it is not easy to trivially adapt the algorithm of one of the applications for the other. There is no formal measure of diversity, but the problem formulations and the applications are intuitively very different. Note that typically, decision tree papers only consider one or two similar benchmarks, whereas we demonstrated the generality of our method on four benchmarks. | null | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper discusses the necessary and sufficient conditions for training optimal classification trees using dynamic programming (DP). In particular, the authors replace the commonly used -- and sufficient -- notion of additivity with order preservation, which is shown to be necessary. The authors also present a generalized framework for modeling optimal classification trees using DP, which yields better results on some benchmarks.
*****
Following the rebuttal by the authors, I am updating my score accordingly.
Strengths: The authors present a very comprehensive review of existing work and an initial explanation of the ideas. I was not aware of some of the references used, such as Verwer and Zhang proposing optimal training with MIP at the same time as Bertsimas & Dunn.
Along those lines, the authors make it clear why training with DP is beneficial, as well as how much was already done in prior work.
Weaknesses: I have a hard time grasping the meaning of the main result in the paper, Theorem 4.6. This is not so much about its correctness, but rather about the feeling that the definitions preceding it seem reverse-engineered to ensure that they are both necessary and sufficient. For example, there is no example helping the reader understand what order preservation means and how it is more general than additivity. If order preservation is the key element in this paper, my lack of understanding about it makes me unsure about the relevance of the main contribution.
Along the same lines, I would have appreciated a real example and discussion of anti-monotonicity, even if it is a concept already used in other papers. I would have expected that to occur with the examples in Section 4.4, but they are very briefly explained - and in cases such as prescriptive policy generation, not explained at all. Moreover, DP has already been used to obtain optimal classification trees in all of the cases studied. If so, what do we gain from the more generalized setting described in this paper?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) Can you please describe the intuition for order preservation and where it would be useful whereas additivity would not?
2) Can you please explain how anti-monotonicity relates to the applications considered?
3) Can you please describe an application that can be addressed by your setting that was not possible previously?
4) It is not clear to me how the STreeD framework benefits from the setting described in this paper to such a point that it outperforms other methods. In your opinion, what makes the setting considered in your work also more convenient computationally?
5) Do you see a possible application of this or a related setting to optimally train decision diagrams for classification?
In terms of notation, I would caution the authors about the use of $\mathcal{D}$ to sometimes represent the entire dataset (as it seems implied in Lines 112 and 165) and sometimes represent a subset of the dataset (such as in the recurrence in equation (2)).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I do not recall seeing a discussion about limitations, but it would be fair to say that outlining what optimal classification trees can and cannot be trained with DP represents an important study about the limitations of a particular form of training algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review and for the positive words about our work! We here respond to each of your questions:
**Novelty**
1. "What is it that we gain from the more generalized setting described in this paper?"
Recent publications at premier venues on using DP for optimal DTs for varying optimization tasks show that there is a great interest in this topic. E.g.:
Accuracy (Aglin et al., AAAI-20);
Nonlinear metrics (Demirović & Stuckey, AAAI-21);
Group fairness (Van der Linden et al., NeurIPS-22); and
Regression (Zhang et al., AAAI-23).
We provide a general framework that covers all these cases, including regression. Normally, for each of these a specialized method had to be developed. We also precisely characterize necessary and sufficient conditions for the use of DP for new optimization tasks.
This adds to our understanding of decision tree algorithms and allows us to quickly model solutions to new optimization tasks, such as for example, individual fairness, for which currently no optimal DP method exists (see next answer).
2. (Q3) "Can you please describe an application that can be addressed by your setting that was not possible previously?"
Consider individual fairness as an example. Aghaei et al. (AAAI-19) have proposed a MIP method for individual fairness, but currently no optimal DP solution for individual fairness exists yet. However, this is possible within our framework. We provide details at the end of this response.
**Order preservation**
3. (Q1) "Can you please describe the intuition for order preservation and where it would be useful whereas additivity would not?"
Recent work showed that DP could also be used for problems that are not additive, e.g., nonlinear metrics (Demirović and Stuckey, AAAI-21) and group fairness (Van der Linden et al., NeurIPS-22). This motivates the search for the limits of the use of DP for optimizing decision trees, which we provide.
The intuition behind order preservation is the principle of optimality (Bellman, 1957): optimal solutions can only be constructed from optimal solutions to subproblems.
**Anti-monotonicity**
4. (Q2) "Can you please explain how anti-monotonicity relates to the applications considered?"
We model group fairness as an anti-monotonic constraint. If it can be proven for a subtree that it could never be part of a tree that satisfies demographic parity, this subtree can be discarded.
**Performance**
5. (Q4) "It is not clear to me how the STreeD framework benefits from the setting described in this paper to such a point that it outperforms other methods. In your opinion, what makes the setting considered in your work also more convenient computationally?"
We employ dynamic programming, which exploits the fact that subtrees can be solved independently (if the conditions we present hold) and repeated subproblems can be cached. Other methods, such as MIP, do not consider this, so these methods end up doing exponentially more work, which is reflected in our experiments.
The disadvantage of DP is that it is specific to each possible application. The added value of the STreeD framework compared to existing DP approaches addresses exactly this: it provides a general approach to using DP for building optimal decision trees.
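To make the caching argument above concrete, here is a hypothetical minimal sketch (our own toy code, not the authors' STreeD implementation, which additionally includes a specialized depth-two solver, bounds, and the general solution-value algebra) of the DP recurrence for the plain accuracy objective on binary features: subtrees are solved independently and repeated (dataset, depth) subproblems are cached.

```python
def optimal_tree(data, depth, cache=None):
    """Minimum misclassifications of a depth-limited decision tree on `data`,
    a frozenset of (binary_feature_tuple, 0/1 label) pairs."""
    if cache is None:
        cache = {}
    key = (data, depth)
    if key in cache:
        return cache[key]
    labels = [y for _, y in data]
    # best single leaf: predict the majority label
    best = len(labels) - max(labels.count(0), labels.count(1))
    if depth > 0 and data:
        n_feat = len(next(iter(data))[0])
        for f in range(n_feat):
            left = frozenset((x, y) for x, y in data if x[f] == 0)
            right = data - left
            # subtrees are solved independently; shared subproblems hit the cache
            best = min(best, optimal_tree(left, depth - 1, cache)
                           + optimal_tree(right, depth - 1, cache))
    cache[key] = best
    return best

# XOR needs depth two: one split leaves 2 errors, two splits classify perfectly.
xor = frozenset({((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)})
assert optimal_tree(xor, 1) == 2
assert optimal_tree(xor, 2) == 0
```

A MIP model must reason about the whole tree at once, whereas this recursion never re-solves a (dataset, depth) pair it has already seen.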
**Decision Diagrams**
6. (Q5) "Do you see a possible application of this or a related setting to optimally train decision diagrams for classification?"
Decision diagrams are not separable in the same way as decision trees: subdiagrams may share nodes, whereas subtrees never share nodes. Therefore, the same breakdown into independent subproblems cannot trivially be applied, and whether one could devise a more efficient DP algorithm for decision diagrams remains an open question.
**Appendix: Individual fairness separable formulation**
Individual fairness optimizes the number of similar individuals (as defined by some distance function) that receive the same label.
The following provides the details of how individual fairness could be modeled as a separable optimization task.
Let $d(x_1, x_2)$ be a distance function that returns one if $x_1$ and $x_2$ are similar, and zero otherwise. Let $O(D) = \\{ (x_1, x_2) \in D | d(x_1, x_2) = 1 \\}$. Let $n = |O(D)|$ over the original dataset $D$.
Whenever a node is split, the transition function should update the state $s$ to record which pairs in $O(D)$ end up in different subtrees. For the subtree with dataset $D’$, call this record $M = \\{ (x_1, x_2) \in O(D) | (x_1 \in D’ \wedge x_2 \notin D’) \vee (x_1 \notin D’ \wedge x_2 \in D’) \\}$.
A solution value consists of bounds on the worst and best case value for the individual fairness, and a label for each pair in $M$: $(worst, best, L: M \rightarrow K)$. This is computed as follows: $g(D, M, \hat{k}) = (|O(D)| / n, 1, L(m) = \hat{k}, \forall m \in M)$.
The worst case is lower bounded by $|O(D)|/n$ because all pairs in $O(D)$ receive the same label. The best case is still 1, in case all instances in $M$ receive the same label. The labels of all pairs in $M$ are set to $\hat{k}$.
When we merge two solution values $(worst_1, best_1, L_1)$ and $(worst_2, best_2, L_2)$ for solutions generated for state $(D_1, M_1)$ and state $(D_2, M_2)$, we check which split pairs in $O(D)$ are joined again: $J = \\{ M_1 \cap M_2 \\}$. Let $v = |\\{m \in J|L_1(m) = L_2(m)\\}|$ be the number of pairs with the same label. The merged solution value becomes:
$(worst_1 + worst_2 + (|J| - v)/n,\ \ 1 - ((1-best_1) + (1-best_2) + v/n),\ \ L(m) = L_1(m) \text{ if } m \in M_1, \text{ otherwise } L_2(m),\ \forall m \in M)$.
A solution is dominated if its worst fairness value is higher than another solution’s best fairness value.
A solution is infeasible if its worst fairness value is higher than a predetermined threshold.
The optimization task described above is separable, but not additive. It satisfies all conditions of our framework and thus results in optimal solutions.
---
Rebuttal Comment 1.1:
Comment: I appreciate the comments by the authors and have updated my score accordingly.
I agree with the points raised by reviewer VtkV about the scope and significance of the work, and I second that reviewer's suggestion of a more specific and meaningful title for this paper. I am counting on the word of the authors about changing it. | null | null | null | null | null | null |
Max-Margin Token Selection in Attention Mechanism | Accept (spotlight) | Summary: This paper aims to provide an optimization-theoretic characterization of the softmax attention model $f(X)=v^{\top}X^{\top}{\rm softmax}(XW^{\top}p)$ by linking it to max-margin problems. The authors established the convergence of gradient descent on $p$ for a fixed choice of $v$, and further explored the joint convergence of $(v,p)$ via regularization path analysis. They also showed that the idea of selecting the optimal token via max-margin can be extended to a general nonlinear model. Their results are verified through numerical studies.
Strengths: - Interesting and important result. Attention has played an important role in large language models; however, theoretical understanding of it is relatively lacking. This paper links the regularized solution and the gradient descent process to the max-margin solution, which may motivate several directions for further research.
- Clear presentation with adequate explanation.
Weaknesses: - Some assumptions are relatively strong. In Assumption B, they assume all non-optimal tokens have equal scores, which may not be true in practice.
- Lacks a convergence analysis of gradient descent when jointly optimizing $(v,p)$.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Why does Lemma 1 imply lines 92-94?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and helpful comments.
> **W1:** Some assumptions are relatively strong. In Assumption B, they assume all non-optimal tokens have equal scores, which may not be true in practice.
**R:** Thanks for raising this. We agree that this assumption is fairly strong, however, it is only used within Theorem 1 and nowhere else. Theorem 1 provides a global convergence guarantee for gradient descent and serves as a prelude to the general behavior of the attention’s implicit bias. However, as you can see, our Theorem 3 on the local convergence of gradient descent and all other theorems do not require this assumption. Theorem 3 also clearly establishes that, in general (i.e. when Assumption B does not hold and non-optimal scores are different), the global convergence (to the optimal direction $p^{mm\star}$) can fail due to the existence of locally-optimal directions.
Thus, the only way global convergence (from any initialization) can happen is if $p^{mm\star}$ is the unique locally-optimal direction (per Definition 2). Assumption B is one condition that guarantees this. We believe another condition that guarantees this might be ensuring that “all tokens involved in ATT-SVM are support vectors”. This way, for any choice of non-globally-optimal tokens $\alpha$, there will be some optimal token $opt_i\neq \alpha_i$ that is its SVM-neighbor. Then by Definition 2, $\alpha$ cannot be locally-optimal because $opt_i$ has a higher score than $\alpha_i$. Thus, $p^{mm\star}$ becomes the unique direction satisfying Def 2. While this condition (i.e. all SVM constraints being support vectors) sounds technical, for classical SVM problems, it holds when the embedding dimension $d$ is large [Muthukumar et al. JMLR’21, Hsu et al. AISTATS’21].
Based on this intuition, we provide a new experiment in Figure 4. We solve many random instances of the attention problem for various values of $d$, run gradient descent for 1000 iterations, and investigate the convergence behavior of the resulting $p(t)$. The bar plot in Figure 4 distinguishes between non-local convergence (red bars), local convergence (blue bars), and global convergence (green bars). Global convergence is a strict subset of local convergence. In short, in line with our hypothesis, as $d$ grows, we observe global convergence with probability approaching $1$. While we do not have a proof of this, it certainly makes an interesting discussion for our paper and we hope to incorporate it.
> **W2:** Lack the convergence of the gradient descent when jointly optimizing (v,p).
**Response:** While this may appear to be a weakness, we emphasize that the contributions we make are novel and challenging even for $p$-only optimization. For joint optimization, we also provide a regularization path theory that successfully predicts the implicit bias of gradient descent (see Figure 2(b,c)). For joint optimization, there is no remotely similar result in the literature, and we have a surprisingly powerful message (see Sec 3.1): $p$ and $v$ (essentially) converge to their respective max-margin solutions, thus, optimization dynamics of “classification” (v) and “attention” (p) can be decoupled. We genuinely hope that this novel message (and other contributions) will spur interest in the community and invite future research to solve these open problems.
We believe that conducting a separate analysis of attention weights $p$ can offer a clearer and more comprehensive understanding of the implicit bias ingrained in gradient descent for attention mechanisms. To accomplish this, our approach involves the introduction of concepts like token scores and locally-optimal tokens, each of which demands a more comprehensive and detailed explanation. Additionally, we undertake an extensive convergence analysis that aims to capture the optimization dynamics through the lens of local SVM geometry and the conic initialization centered around the max-margin solution.
While we acknowledge the potential benefits of a joint implicit bias analysis involving both $(v,p)$, our experiments showcased in Figure 2 of the paper provide evidence for the feasibility of this approach. However, it's important to note that a comprehensive treatment of these intricate technical details might require the incorporation of several novel techniques. Condensing all these details into a single paper could potentially be overwhelming and hinder the clarity of our main findings.
> **Q1:** Why does Lemma 1 imply lines 92-94?
**Response:** Thanks for asking this. Recall that, in our attention model $f(p,W)=v^\top X^\top S(XW^\top p)$ in Line 30 , we can either optimize $W$ or optimize $p$. Throughout the paper, we optimize $p$ because Lemma 1 says that optimizing $W$ can be mapped to optimizing $p$. Lemma 1 creates this mapping as follows: Since we will also use $p$ as a variable, fix a vector $a$ and consider the $W$ optimization with $p\gets a$, $f(W)=v^\top X^\top S(XW^\top a)$, and associated loss function $L(W)$ as defined in Lemma 1.
We map these to the function $f_p(p)=v^\top X^\top S(X p)$ and associated loss function $L_p(p)$, where the idea is viewing the combined $W^\top a$ as a single $p$ variable. We prove that **running gradient descent on $p(t)$ with learning rate $\eta$** is the same as **running gradient descent on $W(t)$ with learning rate $\eta||a||^{-2}$**, and the $W(t)$ iterations precisely track the $p(t)$ iterations via $W(t)=||a||^{-2}ap(t)^\top$. Note that, for this to happen, the initializations $p(0),W(0)$ should match (and that is all!). In general, for any $W$ initialization, there exists a matching $p$ initialization. Lemma 1 states this for a rank-1 $W$ initialization to avoid verbosity. We are happy to address further questions if anything is unclear.
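This equivalence is straightforward to check numerically. Below is an informal sketch (our own construction, using a simple squared loss on the model $f(p)=v^\top X^\top S(Xp)$, not necessarily the paper's exact training objective) verifying that gradient descent on $W$ with learning rate $\eta\|a\|^{-2}$ tracks gradient descent on $p$ via $W(t)^\top a = p(t)$.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_p(p, X, v, y):
    # gradient of (f(p) - y)^2 with f(p) = v^T X^T softmax(X p)
    s = softmax(X @ p)
    u = X @ v
    ds = (np.diag(s) - np.outer(s, s)) @ u      # softmax Jacobian applied to u
    return 2.0 * ((s @ u) - y) * (X.T @ ds)

rng = np.random.default_rng(0)
T, d, eta, y = 4, 3, 0.05, 1.0
X, v, a = rng.normal(size=(T, d)), rng.normal(size=d), rng.normal(size=d)

p = rng.normal(size=d)                          # p-iterations, learning rate eta
W = np.outer(a, p) / (a @ a)                    # rank-1 init with W(0)^T a = p(0)
for _ in range(50):
    g = grad_p(p, X, v, y)
    gW = np.outer(a, grad_p(W.T @ a, X, v, y))  # chain rule: grad_W L = a (grad_p L)^T
    p = p - eta * g
    W = W - (eta / (a @ a)) * gW                # learning rate eta * ||a||^{-2}

assert np.allclose(W.T @ a, p)                  # W(t) tracks p(t) via W^T a
```

Since $p = W^\top a$ implies $\nabla_W L = a\,(\nabla_p L_p)^\top$, each $W$ step moves $W^\top a$ by exactly $-\eta \nabla_p L_p$, so the two trajectories coincide for any loss, not just this toy one.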
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I have read the rebuttal and all the other reviews. To be honest, I am not an expert for this area. After reading the rebuttal, I think this paper has good contributions and the new experiments strengthen the results. Thus I increase my score from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer CU4X
Comment: We appreciate your time and insights in reviewing the paper. Thank you. | Summary: The paper is clear and well-written.
Understanding the optimization dynamics and implicit bias is a significant theoretical issue, especially for modern neural network models.
This paper provides a preliminary theoretical analysis of the margin maximization bias of attention-like models.
Theoretically, the authors provide some global convergence and local convergence results to characterize the implicit bias of gradient descent for attention-like models.
Strengths: The authors present a comprehensive theoretical analysis of the implicit bias associated with margin maximization in attention-like models.
Specifically, they provide theory on global convergence, local convergence, and regularization paths in various training scenarios.
This work extends many theoretical results on the implicit bias of linear models to the context of attention-like models.
Weaknesses: In this article, the authors investigate a model that bears resemblance to attention but involves a significant degree of simplification compared to standard attention models.
The model employed combines $W_K$ and $W_Q$ into a single matrix $W$, while substituting one of the $X$'s with $p$.
While attention models are still relatively underexplored in optimization theory, I believe the simplification adopted in this study is excessive and may even render the analysis somewhat irrelevant.
With such a simplification, especially for the optimization over $W$ or $p$, the problem becomes almost a one-layer neural network optimization problem that is largely irrelevant to attention.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: My primary concern is whether the training dynamics of standard attention models $Atten(X)=W_V X \mathbb{S}(X^\top W_K^\top W_Q X)$ closely resemble the training dynamics of the attention-like model in this study.
Specifically, I find it difficult to discern the training behavior of $W_K$ and $W_Q$, which are crucial components in standard attention, from the results presented by the authors.
If the authors could provide further clarification on this matter, I could change my perspective.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As previously mentioned, the main limitation of the article lies in the oversimplification of the attention model, and it remains uncertain whether the training dynamics of the attention model differ fundamentally from the training dynamics of the attention-like model proposed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and suggestions. Below, we respond to their concerns point by point. We would be happy to respond to future concerns they may have during the discussion period.
> **W1:** This study is overly simplistic... With such a simplification... it becomes almost a one-layer neural network problem that is largely irrelevant to attention.
**Response:** We respectfully disagree with this assessment for the following reasons:
1. Our attention model is practically relevant: in transformers, $p$ corresponds to a tunable prompt or [CLS] token [Oymak et al. ICML’23]. When the sample size is $n=1$, setting $p=x_1$ (i.e., the first token) and optimizing $W$ (via Lemma 1’s equivalence), our theory specializes to establish the implicit bias of a 1-layer self-attention model.
2. The problem does not become a one-layer neural network optimization. In fact, it is very different for the following reasons:
Softmax nonlinearity is different from applying $T$ nonlinear activation functions individually because it couples the $T$ nonlinearities to induce a probability distribution. This makes it the standard choice in attention/transformer layers and as a loss function (cross-entropy). Crucially, softmax also induces sparsity, which is precisely what happens in real attention maps (e.g., see attached Fig 1) or when attention selects the optimal token within our theory.
Feedforward neural nets only multiply weights and features. In contrast, our model $f(X)=v^TX^TS(Xp)$ as well as self-attention multiplies features with each other (i.e. the $X$ term appears twice or more).
3. Finally, we understand the concern that self-attention or transformers may be more practically relevant. On the other hand, we firmly believe our SVM-equivalence framework is fundamental and extensible. To provide a concrete example, we recently discovered that a slight variation of our ATT-SVM seems to predict the implicit bias of self-attention (building on the aforementioned $n=1$ observation). This will be discussed under the next response.
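The token-selection behavior noted above is easy to see on toy data. The following sketch (our own illustrative example, with made-up $X$, $v$, and direction $p$) shows that as $p$ grows along a fixed direction, softmax saturates and the model output $f(X)=v^\top X^\top S(Xp)$ collapses onto the single highest-scoring token.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def f(X, v, p):
    # attention model: v^T X^T softmax(X p), a scalar output
    return v @ (X.T @ softmax(X @ p))

X = np.array([[1.0, 0.0, 0.0],      # T = 5 tokens in R^3 (toy data)
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
v = np.array([2.0, -1.0, 0.5])
p_dir = np.array([1.0, 0.2, -0.3])  # token scores X @ p_dir: 1.0, 0.2, -0.3, 0.6, -0.05

selected = int(np.argmax(X @ p_dir))            # token 0 has the top score
s = softmax(X @ (50.0 * p_dir))
assert s[selected] > 0.999                      # softmax concentrates on one token
assert np.isclose(f(X, v, 50.0 * p_dir), v @ X[selected], atol=1e-6)
```

Only the direction of $p$ matters in the limit, which is why the implicit-bias question reduces to which direction (the max-margin one) gradient descent diverges along.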
> **Q1:** My primary concern is whether the training dynamics of standard attention models resemble the training dynamics of the attention-like model in this study. I find it difficult to discern the training behavior of $W_K$ and $W_Q$, which are crucial components in standard attention. If the authors could provide further clarification on this matter, I could change my perspective.
**Response:** We acknowledge this concern and respond on two fronts:
(1) Training behavior of $W_k$ and $W_q$ under our paper’s setting still follows max-margin directions,
(2) We empirically demonstrate that our Attention<->SVM connection is extensible to self-attention.
**(1) Training behavior of $W_k$ and $W_q$:** Standard self-attention calculates $S(XW_qW_k^\top X^\top)$ where $W_q,W_k$ are size $d\times m$. Clearly, when $W_q,W_k$ are full dimensional ($m=d$), we don’t lose any expressivity by merging them into $W_{prod}=W_qW_k^\top$. On the other hand, we acknowledge that the optimization behavior might be different. Fortunately, for the problem $L(W_k,W_q)=\sum_{i=1}^n v^TX_i^TS(X_iW_qW_k^\top a)$:
We can prove a version of Lemma 1 that creates a mapping between $W_q,W_k$ iterations and $p$ iterations for any $d\geq m\geq 1$.
Numerically, we found that $W_{prod}(t)=W_q(t)W_k(t)^\top$ still converges to max-margin direction.
Our experiments are shown in Fig 1. Fig 1(left) is the outcome of our Lemma 1 (iterations on $W_{prod}$ and associated $p$ iterations) whereas Fig 1(right) is the joint $W_k,W_q$ iterations (akin to transformers) and associated $p$ iterations. $W_k,W_q$ still align with the max-margin direction albeit with a slightly different trajectory. To formalize this, we can map the joint gradient updates $W_k(t+1)=W_k(t)-\eta \nabla_{W_k} L(W_k,W_q)$, $W_q(t+1)=W_q(t)-\eta \nabla_{W_q} L(W_k,W_q)$ to the following $p(t)$ iterations on the $L(p)$ objective: Starting with proper $p(0)$ choice and scalar $\nu_0=1$, run
$\nu_{t+1}=\nu_t-\eta\nu_t^{-1}p(t)^\top \nabla L(p(t))$
$p(t+1)=(\nu_{t+1}/\nu_t)\left[p(t)-\eta\nu_t^2\nabla L(p(t))\right]$
This mapping is not as simple as Lemma 1. Regardless, it strongly suggests that our max-margin theory on $p$ should extend to $W_k,W_q$. We emphasize that Lemma 1 is enabled by the vector $a$ being fixed: this way, the gradient updates on $W_{prod}$ are rank-1 and stay along the $a$ direction. Below, we empirically show that the situation is similar, but more intricate, for self-attention.
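For completeness, the mapped updates above can be simulated numerically. The following is a small sketch with a toy correlation-style loss and a finite-difference gradient (our own illustrative setup, not the paper's experiment), showing the coupled $(\nu_t, p(t))$ iterations are well defined:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def L(p, X, v):
    # toy objective with correlation loss l(x) = -x on the attention output
    return -(X @ v) @ softmax(X @ p)

def num_grad(f, p, eps=1e-6):
    # central finite differences, to avoid hand-deriving the softmax gradient
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
T, d = 4, 3
X, v = rng.standard_normal((T, d)), rng.standard_normal(d)
p, nu, eta = rng.standard_normal(d), 1.0, 0.01   # nu_0 = 1 as in the mapping

for _ in range(20):
    g = num_grad(lambda q: L(q, X, v), p)
    nu_next = nu - eta * (1.0 / nu) * (p @ g)    # nu_{t+1} update
    p = (nu_next / nu) * (p - eta * nu**2 * g)   # p(t+1) update
    nu = nu_next

assert np.isfinite(nu) and np.all(np.isfinite(p))
```

With a small step size, $\nu_t$ stays close to its initialization and the $p(t)$ trajectory tracks a rescaled gradient flow, matching the qualitative picture in Fig 1.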
**(2) Extensibility** In Fig 3, we study the self-attention objective
$
\qquad L(W)=\frac{1}{n}\sum_{i=1}^n \ell\big(Y_i\cdot v^\top X_i^\top S(X_i W^\top x_{i1})\big) \qquad $ (SA-ERM)
This corresponds to running linear classification on the first token output of a self-attention layer ($x_{i1}$). We consider a slightly modified ATT-SVM to capture the inductive bias of this objective:
$
\qquad \min_{W} \|W\|_F \quad \text{subject to} \quad (x_{i\alpha_i}-x_{it})^\top W x_{i1}\geq 1 \quad \text{for all }\quad t\neq \alpha_i,\ i\in[n] \qquad
$ (S-ATT-SVM)
The intuition is as follows: In our ATT-SVM (with Lemma 1), $a$ is fixed, whereas here $a\gets x_{i1}$ changes with each training example. Fig 3 shows **self-attention solutions directionally align** with (S-ATT-SVM). Empirically, we found that directly optimizing $W_{prod}$ biases gradient descent towards (S-ATT-SVM) with the Frobenius norm objective, while optimizing $(W_k, W_q)\in\mathbb{R}^{d\times d}$ separately biases it towards (S-ATT-SVM) with the nuclear norm objective. In our paper's setting (Lemma 1), both objectives coincide because the solution is rank-1 due to the fixed $a$. Finally, if $(W_k,W_q)$ are $d\times m$ with $m<d$, we suspect a low-rank constraint in (S-ATT-SVM) is needed.
To sum up, we agree with the reviewer that $(W_k,W_q)$ or self-attention introduce unique behavior, however, based on empirical evidence, the Attention<->SVM connection introduced by our paper remains valid.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reviewer's response. Now I acknowledge the insights provided by this work. However, I still have the following concerns:
1. I am still confused about the model. In practice, it is common to use $Atten(X)$ based on the dot-product $\left<W_KX, W_Q X\right>$. The model in this work is a bit strange, and it is unclear why the authors changed the $W_K^\top W_QX$ term in it to $Wp$. At least, the authors should suggest some motivation for changing $Atten(X)$ in this way.
2. In practice, it is hard to train Attention-based Transformer models, i.e., the loss is difficult to converge to $0$. However, the margin-maximization implicit bias is usually observed at the terminal stage of training. Can this implicit bias be observed in the practical setting of training Transformer?
3. Most methods and techniques in this work are extended from the linear model (Ji and Telgarsky), such as the regularized path. Could the authors briefly clarify the difference in proof technique?
---
Reply to Comment 1.1.1:
Title: Response to Questions 1 and 2
Comment: > **Q1-1:** I am still confused about the model. In practice, it is common to use $Atten(X)$ based on the dot-product $<W_KX, W_QX>$. The model in this work is a bit strange, and it is unclear why the authors changed the $W_K^\top W_QX$ in it to $Wp$.
**Response:** As mentioned in our initial response, the main claim that
softmax-attention weights, trained with gradient descent, converge to a max-margin solution that effectively separates locally-optimal tokens from non-optimal ones
unsurprisingly **remains valid when the optimization problem is formulated using the $W_KW_Q^\top$ decomposition** and gradient descent updates $W_K,W_Q$ separately. Our new experiments (Figure 1 in the attached file) and subsequent analyses, which establish a correspondence between $p$ and the matrices $W_K$ and $W_Q$, further show that their product matrix $W_{\text{prod}} = W_KW_Q^\top$ asymptotically approaches the solution seen in our SVM.
Note that $W_{\text{prod}}$ is what matters for the eventual model; this is why we directly used it in our exposition. However, based on the reviewer's concern:
- We will start our exposition with $(W_K, W_Q)$ and then combine them into $W_{\text{prod}}$.
- We will also discuss the mapping we constructed for $W_K, W_Q$ and explain that their $W_{\text{prod}}$ is also similarly predictable and goes to the SVM solution empirically.
> **Q1-2:** At least, the authors should suggest some motivation for changing $Atten(X)$ in this way.
**Response:** Our attention model is $f(X)=\left<Xv,\mathbb{S}(X W^\top p)\right>$. There are three motivations for this:
**I. Prompt-Tuning and '[CLS]' Token**: In practice, the model arises from prompt-tuning [Lester et al. EMNLP’21] or the '[CLS]' token [Devlin et al. NAACL’19]. To see this, following [Oymak et al. ICML'23] (see their Sec 2.1), let $p$ be a tunable prompt (attached to input $X$) and consider the cross-attention between prompt-attached input $[X;p]$ and $[X]$. Within this cross-attention layer, this results in the output
$\qquad \text{Attn}([X;p],X)=\mathbb{S}([X;p]WX^\top)XV=[\mathbb{S}(XWX^\top)XV; \mathbb{S}(p^\top WX^\top)XV]$
$\qquad$ Thus, the output associated with the tunable prompt has the form $V^\top X^\top \mathbb{S}(XW^\top p)$ which is exactly our model.
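This decomposition can be checked numerically. A small numpy sketch (shapes and helper names are ours): with row-wise softmax, the row of $\text{Attn}([X;p],X)$ associated with the prompt equals $\mathbb{S}(p^\top WX^\top)XV$ exactly:

```python
import numpy as np

def rowsoftmax(Z):
    # softmax applied independently to each row of Z
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
T, d = 5, 4
X = rng.standard_normal((T, d))
W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))
p = rng.standard_normal(d)

Xp = np.vstack([X, p])                     # prompt-attached input [X; p]
out = rowsoftmax(Xp @ W @ X.T) @ X @ V     # cross-attention Attn([X; p], X)

# The row associated with the prompt is exactly S(p^T W X^T) X V:
expected = rowsoftmax((p @ W @ X.T)[None, :]) @ X @ V
assert np.allclose(out[-1], expected[0])
```

The check goes through because row-wise softmax acts on each row independently, so attaching $p$ as an extra row does not affect the other rows.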
**II. Unveiling the Self-Attention Mechanism:** The model recovers self-attention when the sample size is $n=1$. Observe that *first token output* of the self-attention is given by $\text{Attn}(X)=V^\top X^\top \mathbb{S}(X W^\top x_1)$. Setting $a\gets x_1$ in Lemma 1, our results recover this self-attention setting. That is also how we recently discovered implicit bias of self-attention admits an SVM for $n>1$. Please see initial response and the connection between SA-ERM and S-ATT-SVM.
**III. Connection to Matrix Factorization:** The model is fundamental in nature. If you remove the softmax, it becomes a rank-1 matrix learning model where the goal is learning $(v,p)$ from labels of the form $y=v^\top X^\top Xp=\left<X^\top X,pv^\top \right>$. There is a vast literature on this. We believe extending such a matrix factorization viewpoint to softmax nonlinearity is mathematically fundamental and interesting.
> **Q2.** In practice, it is hard to train Attention-based Transformer models, i.e., the loss is difficult to converge to 0. However, the margin-maximization implicit bias is usually observed at the terminal stage of training. Can this implicit bias be observed in the practical setting of training Transformer?
**Response:** The loss in Theorems 1-4 **does not converge to zero**. This is because we keep $v$ fixed and only train $p$. Since $v$ remains fixed and attention outputs a convex combination of tokens, the model cannot drive the output to $\infty$ and reduce the loss to zero. We believe that this distinction also represents a **significant difference** from margin maximization in logistic regression applied to separable data [Soudry et al. JMLR’18, Ji and Telgarsky ICLR 2019]. We offer empirical evidence that self-attention's implicit bias can be predicted through a minor adjustment of our ATT-SVM. Additionally, in Section 4, we elaborate on how these theoretical insights extend to nonlinear prediction heads, such as MLP.
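To illustrate why the loss stays bounded away from zero with fixed $v$: since the attention output is a convex combination of the token scores, the logistic loss (for label $Y=1$) is floored at $\log(1+e^{-\gamma_{\max}})>0$ no matter how large $\|p\|$ grows. A minimal numpy check on a toy instance of our own construction:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, d = 4, 3
X, v = rng.standard_normal((T, d)), rng.standard_normal(d)

gamma_max = (X @ v).max()                # best achievable token score
floor = np.log1p(np.exp(-gamma_max))     # lowest possible logistic loss (Y = 1)
assert floor > 0

# Even prompts with huge norm cannot push the loss below this floor:
for scale in (1.0, 10.0, 100.0, 1000.0):
    p = scale * rng.standard_normal(d)
    f = (X @ v) @ softmax(X @ p)         # convex combination => f <= gamma_max
    loss = np.log1p(np.exp(-f))
    assert loss >= floor - 1e-9
```

This is the quantitative version of "the model cannot drive the output to $\infty$": the output is capped by the best token score, so the loss is capped below by a strictly positive constant.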
Given these considerations, we maintain that our findings offer valuable insights for transformers. At the very least, our experiments with real data (refer to attached Fig 2) demonstrate that the empirical phenomena in the optimization dynamics of transformers align with our theory. Specifically, softmax/attention maps become sparser over time, while the norm of the attention weights continues to increase over the same period. This observation essentially aligns with our theoretical proposition: softmax saturates on optimal tokens, and the weights tend toward infinity. | Summary: The paper focuses on the attention mechanism which is commonly used in transformer architectures. In particular, the authors introduce a certain attention model and investigate its optimization dynamics and inductive biases under various assumptions on token's scores. In particular, the setting is a single-head attention mechanism trained by gradient descent and with decreasing losses, such as logistic or linear loss for binary classification. The results mainly hold for attention with linear head and fixed classifier head however some of the results hold without these limitations. The first main contribution of the paper is proving (under the assumption that all non-optimal tokens have the same score value) directional convergence of tunable prompt (denoted by $p$) to a certain max-margin solution which separate one token from the rest of tokens for each input. However, the imposed assumption on score values can be limiting; thus the authors assert that proving local convergence is possible if the initialization is within a cone of the final solution. The paper also studies some extensions such as the joint optimization of the classifier head and trainable parameters and demonstrate that under a specific label margin conditions, the classifier head and trainable parameters converge to their respective max-margin solutions. 
Some numerical results on synthetic data are designed to validate the claims of the paper.
Strengths: The paper is the first work on the implicit bias behavior of GD for the attention mechanism. Understanding implicit bias is crucial for several directions such as fairness, optimization behavior and generalization bounds. Some assumptions of this work are idealized, but that is fine given that it is one of the first solid works in this direction. The most important aspect of the paper is giving a formalized understanding of the attention mechanism as a token-selection mechanism and providing sufficient conditions for convergence to a solution which favors optimal tokens. By connecting the attention mechanism to the implicit bias literature and the max-margin SVM formulation (which for the attention mechanism takes a new and interesting form), the study establishes a solid foundation for future research. The required work for obtaining the results is non-trivial and sufficiently distinct from previous works in the implicit-bias literature. The paper is also well-written in most parts, although some parts become very technical with little intuition, and thus it can be difficult to clearly understand results such as those in Section 2.3.
Weaknesses: some questions and minor weaknesses and suggestions:
- Regarding Theorem 1, the fact that any initialization leads to convergence is rather counter-intuitive. Do the assumptions of Theorem 1 imply that the problem is convex, or is there some other reason?
- Also related to Theorem 1, it is not trivial why the parameter norm ($\|p_t\|$) diverges since $p_t$ is inside the softmax. Can the authors please explicitly specify the behavior of $\|p_t\|$ in the statement of the theorem? We know for GD on ReLU neural networks that the parameter norm diverges, but that is due to the ReLU nonlinearity, which implies that the loss prefers large first-layer weights.
- The contributions section could be more specific by providing more details; for example, in line 52 the authors could be more specific in explaining their contributions for the model with a non-linear head and specifying the key distinctions with previous parts.
- How are the values of $\mu$ and $R$ determined in theorem 3? This can be insightful as they are the key parameters in specifying the cone.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: please see the section above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately discuss the limitations throughout the paper. I do not see any potential negative impact with this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their thorough feedback and helpful suggestions.
>**W1:** Is the problem in theorem 1 convex, any other reason?
**R:** Thank you for the great question. First, let us clarify that the problem is not convex even under Assumption B. One reason is that Assumption A actually allows for very general nonconvex loss functions. However, even in the simplest setting we can come up with, the problem is not convex. Concretely, let us set
- $\ell(x)=-x$
- Pick $n=1$ sample and $T=2$ tokens. Make the tokens unit $\ell_2$-norm and orthogonal.
- Set $v=x_1$, $p=cx_1$ with $c\in \mathbb{R}$. This way $x_2^\top v=x_2^\top p=0$.
Following this, the training objective takes the form $\max_p L(p) := \mathbb{S}([c, 0])_1 = \frac{e^c}{1+e^c}$. This is the standard logistic function, which is neither convex nor concave. We will add this example to the paper.
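The nonconvexity in this example can be confirmed numerically: the logistic function $\frac{e^c}{1+e^c}$ has positive curvature for $c<0$ and negative curvature for $c>0$, so it is neither convex nor concave. A short finite-difference check:

```python
import numpy as np

def sigma(c):
    # the logistic function e^c / (1 + e^c) from the example above
    return 1.0 / (1.0 + np.exp(-c))

def second_diff(f, c, h=1e-4):
    # central second difference approximating f''(c)
    return (f(c + h) - 2 * f(c) + f(c - h)) / h**2

assert second_diff(sigma, -2.0) > 0   # convex region (c < 0)
assert second_diff(sigma, 2.0) < 0    # concave region (c > 0)
```

The sign change of the second derivative at $c=0$ is precisely why the objective is nonconvex even in this one-dimensional reduction.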
Rather than convexity, our proof relies on establishing favorable **gradient correlation** presented in Lemmas 3 and 5. For instance, in Lemma 5, we establish that for any choice of $\pi$, there exists $R_{\pi}$ such that:
$\langle \nabla \mathcal{L}(p), \frac{p}{\|p\|}\rangle \geq (1+\pi) \langle \nabla \mathcal{L}(p), \frac{p^{mm}}{\|p^{mm}\|} \rangle.$
If $\mathcal{L}(p)$ was convex, it would actually help with establishing the above. Instead, our analysis could be perceived as a directional convergence version of the restricted secant inequality [Karimi et al. ECML’16] where the gradient behaves nicely towards the $p^{mm}$ direction. Via gradient correlation, we are also establishing the **weak convexity** of $\mathcal{L}$: As $p$ converges in direction to $p^{mm}$, it can be demonstrated from Lemma 4 that $\nabla \mathcal{L}((1+\pi) ||p||p^{mm}) \rightarrow 0$, which implies the weak monotonicity of the gradient and the weak convexity of $\mathcal{L}$.
> **W2:** It is not trivial why parameters norm ($p(t)$) is diverging since $p(t)$ is inside softmax. Can the authors please explicitly specify the behavior of $||p(t)||$?
**R:** We will provide a better discussion in the final manuscript. The intuition is as follows: Softmax output is a probability vector, thus the attention output $f(X)=\left<Xv, \mathbb{S}(Xp)\right>$ creates a convex combination of token scores $\gamma=Xv\in \mathbb{R}^T$ (here we simply set label $Y=1$). When $v$ is fixed, $\gamma$ is fixed and, since the loss function is decreasing, the smallest training loss the attention model can achieve is by **assigning all softmax probability to the tokens with the highest score**. Otherwise we are strictly worse off when the convex combination contains some non-optimal tokens. Thus, we want probability 1 for optimal tokens and 0 for others. On the other hand, softmax with finite weights cannot accomplish this because softmax output is strictly positive. This is precisely why norms go to $\infty$: Softmax asymptotically sets token probabilities to $1$ and $0$.
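A small numpy sketch of this intuition (the scores and logit direction are made-up toy values): scaling up the logits, which stands in for the growing norm $\|p(t)\|$, pushes the attention output toward the optimal score, but probability 1 is never attained at any finite scale:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.0])       # token scores gamma = Xv; token 0 is optimal
direction = np.array([1.0, 0.0, -1.0])   # logit direction favoring the optimal token

prev = -np.inf
for scale in (1.0, 5.0, 30.0):           # stand-in for the growing norm ||p(t)||
    probs = softmax(scale * direction)
    out = scores @ probs                 # convex combination of token scores
    assert out > prev                    # output climbs toward the optimal score...
    prev = out
    assert probs[0] < 1.0                # ...but softmax never fully saturates

assert abs(prev - scores.max()) < 1e-6   # saturation happens only in the limit
```

Since the output only reaches the optimal score in the limit, gradient descent keeps pushing the norm upward, which is why $\|p(t)\|\rightarrow\infty$.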
To provide a GD-specific intuition, Lemma 4 in the supplementary shows: $\langle \nabla \mathcal{L}(p), p^{mm} \rangle < 0$ for all finite $p \in \mathbb{R}^d$. Consequently, there are no finite critical points $p$ for which $\nabla \mathcal{L}(p) = \mathbf{0}$. This implies that $\|p(t)\|\rightarrow \infty$. In our Thms 1 and 3, we prove that $\|p(t)\|$ diverges and aligns with the SVM solution, and the attention maps $\mathbb{S}(Xp)$ select optimal tokens as $t$ grows. However, a precise quantification of the behavior of $\|p(t)\|$ remains open.
> **W3:** Contributions section can be more specific, specifically for nonlinear head
**R:** Thank you for the suggestion. We will incorporate the following in the main text:
To establish the margin maximizing nature of attention under broader conditions, we study the general model $f(\mathbf{X})=\psi(\mathbf{X}^\top \mathbb{S}(\mathbf{X}\mathbf{W}^\top p))$ where $\psi$ is a nonlinear head. This setting poses challenges as we lack a clear score function, unlike the previous sections. To address this, we introduce a generic condition that splits the tokens of each input into an optimal set and a non-optimal set. Non-optimal tokens are those that strictly increase the training risk if they are not fully suppressed by attention probabilities $\mathbb{S}(X_iW^\top p)$.
> **W4:** How are the $\mu$ and $R$ determined in Thm 3?
**R:** As stated in Step 1 of the proof sketch on page 6, $\mu$ is a function of the margin of the entire dataset. Specifically:
$\delta := \frac{1}{2}\min_{i\in[n]}\min_{t\in \mathcal{T}_i,\,\tau\in \bar{\mathcal{T}}_i}(\mathbf{k}_{it}-\mathbf{k}_{i\tau})^\top p^{mm}, \quad A := \max_{i\in[n],t\in[T]} \|\mathbf{k}_{it}\| \cdot \|p^{mm}\|, \quad \mu = \frac{1}{8}\left(\frac{\min(0.5,\delta)}{A}\right)^2.$
Furthermore, Lemma 3 reveals that $R$ is inversely dependent on both $\mu$ and the score gap:
$R \geq \frac{\max(2,\delta^{-1})}{\|p^{mm}\|}\log\left(\frac{64T\Gamma A}{\boldsymbol{\gamma}}\right),$
where $\boldsymbol{\gamma} = \min_{i\in[n]}\boldsymbol{\gamma}_i$ represents the worst-case score gap across all inputs.
The definition of $\mu$ offers valuable insights into the gradient descent initialization process. When tokens are more separable, $\mu$ increases, leading to a reduction in correlation ($1-\mu$) and an expansion of the initialization cone. Similarly, $R$ exhibits an inverse relationship with both $\mu$ and the score gap $\boldsymbol{\gamma}$. When the data's score gap and $\mu$ are large, gradient descent can commence with a small norm, allowing for initialization within a wider cone around $p^{mm}$.
We will integrate the aforementioned insights into the discussion of Theorem 3, emphasizing the significant impact of considering the interplay between $\mu$, the score gap, and the properties of tokens on the gradient descent initialization process.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 8XUp
Comment: Thank you for your time and effort in reviewing our paper. | Summary: This work studies the mechanism for relevant token selection in the attention model by drawing connections with the implicit bias literature and max-margin SVM formulation. The authors consider the prompt attention model $f(X)=v^TX^T\text{softmax}(XW^Tp)$, with tokenized input $X$, value weights $v$, key-query weights $W$, and tunable token/prompt $p$. They show that running gradient descent (GD) on $p$ and $W$ (for fixed $W$ and $p$, respectively) is equivalent, so they consider optimizing only $p$ for fixed $W$.
They consider a data setting where the quality of $t^{th}$ token of input $X$ is determined by its scores $Yv^Tx_t$, with the globally optimal tokens being the ones with the highest scores.
First, they consider optimizing $p$ when $v$ is fixed using a loss that is decreasing and smooth. In this setting, they show the following results:
- Under an assumption on the token scores, $p$ converges (in direction) to the global max-margin solution, which separates the globally optimal tokens from the rest.
- They also develop a regularization path analysis to show global convergence by relaxing the said assumption.
- The main result shows that with an appropriate initialization and a small enough step size, the GD iterates of $p$ converge (in direction) to a local max-margin solution that separates the locally-optimal tokens from the rest. Here, locally optimal tokens are defined as the ones which have higher scores than their SVM neighbors.
Next, they consider the joint optimization of $p$ and $v$ using logistic loss. In this case, for a given $p$, if the resulting features are separable, $v$ has an implicit bias to converge to the max-margin solution as the problem is linear in $v$. Here, optimal tokens are defined as the ones that maximize the downstream label margin. They show that:
- When the attention features (for the max-margin $p$) are all support vectors (for the respective max-margin $v$), both $v$ and $p$ converge to their respective max-margin solutions.
- When this is not the case (i.e. the attention features are not all support vectors), $p$ asymptotically selects one token per input, and it suffices to select tokens with the highest scores while also mixing other tokens. This does not impact the margin of $v$, which still converges to the max-margin solution.
Throughout, the authors illustrate these results through numerical experiments.
Strengths: 1. This work gives interesting theoretical insights into the mechanism for relevant token selection in the attention model.
2. It lays the groundwork to analyze attention mechanism using the lens of implicit bias and motivates several interesting directions for future work.
3. Overall, it is an interesting paper, with clear exposition and well-connected ideas. It makes several meaningful contributions (as listed in the summary).
Weaknesses: 1. The experimental results seem limited and the paper would benefit from the inclusion of more experiments.
2. Some points of discussion can enhance intuition, and certain aspects regarding figures need some clarification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Experiments:
- The numerical experiments illustrate the theoretical results well. However, it would be good to include additional experiments on semi-synthetic or real datasets, such as those considered in [1], [2].
- Fig. 1(b) illustrates the local convergence of $p$ in the red or blue direction, depending on the initialization. It would be helpful to see if there are cases when the GD iterates do not converge to either of the two solutions shown in Fig. 1(b). This would give some insight into the gap between the empirical observations and the theoretical result for this case.
2. Discussion/clarification:
- Points of discussion:
- Generally, margin maximization refers to separating samples/features from two classes, whereas in this work, the main contribution is to show that the tunable prompt converges to a solution that separates the globally/locally optimal tokens from non-optimal ones. I suggest including more discussion on this part in the introduction. Relatedly, it would help to clarify lines 36-37.
- In Section 2, choice of $v$ determines the scores, and hence the solution learned by $p$. Some discussion on this would be nice.
- The data setting is interesting, but since there are a lot of cases, it would be helpful to show some connections between (some of) the synthetic data setting(s) and some datasets that are commonly used in practice.
- In the discussion on convergence to the local max-margin solution, and the description of Fig. 1(b) in Section 2.2, it would be helpful to clarify that depending on the initialization, the GD iterates will converge to either the global max-margin (when initialized in the cone associated with that solution) or the local max-margin.
- Figures:
- In Fig. 1(a), it is unclear what the role of the blue line (local max-margin solution) is, since both the non-opt tokens take the same value.
- Fig. 1(c) description needs some clarification. The separating hyperplane in the figure looks like it is the max-margin solution for the teal datapoint, but not the max-margin across all three colors (the margin with green is small).
- In Figs. 2(b) and (c), it would be helpful to have the legend for red and blue curves associated with $p$ and $v$, respectively.
- Other:
- There are some typos/inconsistencies in the proof of Lemma 1 that should be corrected.
- Theorem 2 shows global convergence of $p$ via regularization path analysis. It can be moved above assumption B to improve flow, as it is more general.
- In Section 2.2, it would be helpful to use some specific notation for the solution that separates the locally optimal tokens from the rest ($p^{mm}(\alpha)$), such as $p^{mm}_l$.
- Fig. 2(a) comparing the transient dynamics for correlation and logistic loss is interesting. However, it needs a minor clarification in the description. It is stated that when $p$ selects the optimal token, the gradient norm $\propto \gamma_i$ for correlation loss, and $\propto \gamma_ie^{-\gamma_i}$ for logistic loss, and we can compare loss for tokens with different scores. However, if $p$ selects the optimal token, the score would be fixed. Maybe, it can be rephrased to “if $p$ selects the token with score $\gamma_i$”.
- In Section 3.1, label margin is defined as $\frac{1}{||v_{mm}||}$, but in the example, label margin $\gamma$ is defined as $||v_*||$. It would be helpful to mention what $v_*$ is for clarity.
References:
[1] Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, and Christos Thrampoulidis. On the role of attention in prompt-tuning. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023.
[2] Hongkang Li, Meng Wang, Sijia Liu, and Pin-Yu Chen. A theoretical understanding of shallow vision transformers: Learning, generalization, and sample complexity. arXiv preprint arXiv:2302.06015, 2023.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review and helpful suggestions, they will definitely help improve the paper.
> **Q1:** The numerical experiments such as those considered in [1], [2].
**R:** Following your suggestion, we conducted additional experiments using real-world datasets to further substantiate our hypothesis concerning the optimization dynamics in transformers and attention. Our theory successfully anticipates two crucial empirical phenomena:
- The attention map (i.e. softmax output) becomes more sparse over time by focusing on the most informative tokens.
- This is achieved by the norm of the attention weights ($W$) growing over time and leading to a "saturating" effect on softmax, resulting in a sparse pattern.
Our experiments in Figure 2 verify both of these predictions. We train a vision transformer (ViT-base) model from scratch with the CIFAR-10 dataset for 400 epochs with fixed learning rate $3\times 10^{-3}$.
- In Figure 2 (left), we present the progressive change in attention weights of the [CLS] token (which corresponds to our $p$ parameter) during training, computed from all attention heads within the model.
- In Figure 2 (right), we display the norm of attention weights and the sparsity level of attention maps averaged over all layers. We used the squared ratio (L1 norm / L2 norm)$^2$ of the attention maps as a soft-sparsity measure, where a smaller value indicates a sparser vector.
Initially, during the early epochs of training, the attention weights are randomly distributed, leading to a dense pattern. However, as training progresses, the weights grow, causing the attention map to gradually become sparser in line with our Thm 1-3.
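For reference, the soft-sparsity measure used above can be sketched as follows (a toy check of our own, not the ViT experiment): scaling up the logits, which mimics the growing attention-weight norm during training, drives $(\ell_1/\ell_2)^2$ of the softmax output down toward 1:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_sparsity(a):
    # (L1 norm / L2 norm)^2: equals 1 for a one-hot vector, len(a) for uniform
    return (np.abs(a).sum() / np.linalg.norm(a)) ** 2

rng = np.random.default_rng(0)
logits = rng.standard_normal(16)   # one toy attention row over 16 tokens

# Larger scale (growing attention-weight norm) => sparser attention map:
s = [soft_sparsity(softmax(c * logits)) for c in (0.1, 1.0, 10.0)]
assert s[0] > s[1] > s[2]
```

At small scale the map is near-uniform (measure near 16); at large scale it approaches one-hot (measure near 1), matching the trend in Figure 2 (right).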
> **Q2:** Scenarios where GD doesn’t converge to locally-optimal directions
**R:** Thank you for the comment. We've examined the gradient descent-generated behavior of $p(t)$ in Figure 4 (attached) with random initialization. We ran experiments with varying $d$ values for 1000 iterations. We find that, **(1, red bar)** for small $d\in[2,5]$, $p(t)$ does not have to saturate softmax i.e. may not converge to a local ($p^{mm}$) or global ($p^{mm\star}$) max-margin direction. We suspect this is because the **ATT-SVM is not feasible for small $d$**. **(2, blue bar)** For larger $d$’s, $p(t)$ indeed converges to a locally-optimal direction. **(3, green bar)** As $d$ gets even larger, $p(t)$ converges more frequently to the globally-optimal direction; please also see table below.
| d | % of finite $p(t)$ | % of $p(t) \rightarrow p^{mm}$ | % of $p(t) \rightarrow p^{mm \star}$ |
|-----|--------|-------|--------|
| 2 | 69.1 | 30 | 10.4 |
| 5 | 5.9 | 92.8 | 17.9 |
| 10 | 0 | 99.7 | 38.3 |
| 100 | 0 | 99.5 | 92 |
| 300 | 0 | 100 | 99.1 |
| 500 | 0 | 100 | 99.8 |
> **Q3:** Clarify the main contribution (nature of ATT-SVM) and lines 36-37
**R:** Thank you for the great suggestion. We will incorporate the following discussion in the main text:
Gradient descent on logistic loss and separable datasets converges to the hard-margin SVM solution for linear classification [Soudry et al. JMLR’18, Rosset et al. NeurIPS’03, Telgarsky ICML’13]. Similarly, the attention layer in neural networks, utilizing the softmax nonlinearity, exhibits behavior resembling margin-maximizing solutions. However, the attention mechanism operates on input tokens rather than performing direct classification. Thus, it aims to separate tokens within input sequences, favoring SVM-like solutions, represented by (ATT-SVM). Formalizing this intuition is challenging due to the highly nonconvex optimization landscape caused by the softmax operation.
> **Q4:** Discussion on choice of $v$ and scores
**R:** Agreed, we will elaborate on the score definition and how the choice of $v$ impacts the solution learned by $p$.
> **Q5:** The data setting is … used in practice.
**R:** Please refer to our response to your **Q1**. We also emphasize that Def 2 and Thm 3 notably apply to general datasets. We only need ATT-SVM to be feasible.
> **Q6:** In the discussion on convergence, clarify dependence of GD on initialization
**R:** Thank you for the suggestion. We will highlight that convergence of attention weights depends on the point of initialization and GD can potentially converge to any of the locally-optimal directions per Def 2.
> **Q7:** In Fig. 1(a), the role of the blue line is unclear
**R:** Agreed with the reviewer. In Fig 1(a), the blue line is unnecessary and will be removed. We will also clarify Fig 1(c) by providing additional explanatory notes. In Fig 2(b) and (c), we will include a legend to denote the red curve associated with $p$ and the blue curve associated with $v$, providing clear identification for each curve.
> **Q8:** typos/inconsistencies in the proof of Lemma 1 that should be corrected.
**R:** Thanks for catching this. We also noticed it after submission. We replaced all $q$ terms with $p$ and we are now consistently using $\cal{L}_p$ and $\cal{L}_W$ to distinguish the optimization objectives of $p$ and $W$.
> **Q9:** Theorem 2 is more general.
**R:** Totally agreed. We'll include your suggestion in the final paper version.
> **Q10:** In Section 2.2, it would be helpful … such as $p^{mm}_l$.
**R:** Thanks, good point. We may include this in the final paper.
> **Q11:** Clarify Fig. 2(a) … “if $p$ selects the token with score $\gamma_i$.
**R:** We will clarify that $n=2$ and there are two optimal tokens with scores $\gamma_1=1,\gamma_2=C$ (non-opt score is 0). Also the sentence “$p$ *approximately* selects optimal tokens” refers to softmax output assigning high-probability to the optimal tokens. This means $\approx 1$ probability (which indeed occurs as we run GD longer) but not necessarily $=1$.
> **Q12:** Clarify $v_*$ and its margin.
**R:** Thank you, we will clarify that $v^{mm}=v^*/||v^*||^2$ resulting in $1/||v^{mm}||=||v^*||$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. All concerns have been addressed and the experimental results shared by the authors further strengthen the paper. Hence, I have increased the score to 8.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 5i57
Comment: We thank you for your thorough review and valuable suggestions that significantly improved the quality of our work. | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and efforts of the reviewers. We highlight the **main contributions (C1-C3)** of the paper and present **new experiments (E1-E4)** along with explanations for the corresponding **attached figures (Figs 1-4)**. We would be grateful to respond to any reviewer inquiries during the discussion period.
**C1:** As Reviewer 8XUp kindly states: "The paper is the **first work** on the implicit bias behavior of GD for the attention mechanism". Implicit regularization is extensively studied for linear models and standard neural net architectures [Soudry et al. JMLR’18; Gunasekar et al. NeurIPS'18; Arora et al. NeurIPS’19; Li, Wang et al. NeurIPS’22; Frei et al. ICLR’23 and more]. However, the attention mechanism remains an important unexplored topic. We investigate attention's optimization landscape and analyze its implicit bias, shedding light on the role of softmax nonlinearity. Soudry et al. and others connect logistic regression to standard SVM, which separates inputs based on their labels. Instead, we show attention is biased towards ATT-SVM which separates and selects optimal tokens within the input sequences.
**C2:** We make innovative and nontrivial theoretical contributions:
- Defs. 1 and 2 introduce novel concepts of **token scores** and **locally-optimal tokens**, and Lems 2-5 present innovative theories integrating these concepts with the softmax nonlinearity's special structure.
- Gradient analysis (specifically Thm 3) requires novel proof ideas that capture the optimization dynamics in terms of the local SVM geometry and conic initialization around the max-margin solution.
- Our regularization path analysis in Secs. 3 and 4 yields nontrivial findings, remarkably predicting the implicit bias of gradient descent when jointly optimizing $(v,p)$.
**C3.** Reviewers sy1C, 5i57, and 8XUp acknowledge the potential of **our work to open new research avenues**. We see the connection between attention and ATT-SVM as fundamental, providing a general framework to comprehend complex architectures and generalization dynamics, akin to the logistic regression and classical SVM connection in deep learning theory [Soudry et al. JMLR’18, Rosset et al. NeurIPS’03, Telgarsky ICML’13 and more]. See **E3** below for supporting evidence.
**Supporting Experiments**
**E1:** Reviewer iE2T remarks that transformers use separate key and query weights $(W_k, W_q)$, while our approach uses the combined matrix $W_{prod}:=W_q W_k^\top$. In Fig 1 (attached), we demonstrate that regardless of whether we optimize $W_k$ and $W_q$ separately or as $W_{prod}$, the resulting **trajectories align in direction with the ATT-SVM solution**. As predicted by Lemma 1, $W_{prod}$ converges to a rank-1 matrix of the form $a {p^{mm}}^\top$, where $p^{mm}$ is the parameter obtained by running gradient descent on the $p$ parameter. We use the objective
$\qquad L(W_k,W_q)=\frac{1}{n}\sum_{i=1}^n \ell(Y_i\cdot v^\top X_i^\top \mathbb{S}(X_i W_k W_q^\top a)) \qquad$ (QK-ERM)
with a fixed vector $a$, in accordance with Lemma 1.
More in the response to Reviewer iE2T.
**E2:** Our theory predicts crucial empirical phenomena in transformer optimization dynamics:
- The attention map (i.e. softmax output) becomes more sparse over time by focusing on the most informative tokens.
- To do so, the norm of the attention weights $W$ should grow over time and “saturate” softmax towards a sparse pattern.
In support of this, Fig 2 (left) displays the evolving attention map of the [CLS] token during training, aggregated from all attention heads of a vision transformer. Fig 2 (right) shows the average norm of attention weights and sparsity level of attention maps across all layers. Initially dense and random, the attention weights gradually grow, leading to a sparser attention map.
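The saturation mechanism described above can be sketched numerically. In this hedged illustration (the token scores are hypothetical, not from the experiments), scaling a fixed logit direction by a growing constant $c$, which mimics a growing attention-weight norm, drives the softmax output toward a one-hot vector:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

token_scores = [2.0, 1.0, 0.5]  # hypothetical per-token scores along a fixed direction
for c in (1, 5, 25):            # c mimics a growing attention-weight norm
    probs = softmax([c * s for s in token_scores])
    # (L1/L2)^2 soft sparsity: 1 for a one-hot vector, len(probs) for a uniform one
    sparsity = (sum(probs) / math.sqrt(sum(p * p for p in probs))) ** 2
```

As $c$ grows, `probs` concentrates on the highest-scoring token and the soft-sparsity measure shrinks toward 1, mirroring the trend in Fig 2 (right).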
More in the response to Reviewer sy1C.
**E3:** In Fig 3, we consider the **self-attention** objective
$\qquad L(W)=\frac{1}{n}\sum_{i=1}^n \ell(Y_i\cdot v^\top X_i^\top \mathbb{S}(X_i W^\top x_{i1})) \qquad$ (SA-ERM)
This corresponds to running linear classification on the first token output of a self-attention layer ($x_{i1}$). We recently discovered that a slightly modified ATT-SVM can predict the implicit bias of self-attention:
$\min_{W} \left\Vert W \right\Vert_F \quad \text{subject to} \quad ( x_{i\alpha_i}-x_{it})^\top W x_{i1} \geq 1 \quad \text{for all } \quad t\neq \alpha_i, i\in[n] \qquad $ (S-ATT-SVM)
Fig 3 shows self-attention solutions directionally align with (S-ATT-SVM). Empirically, we found that optimizing $W_{prod}$ biases gradient descent towards (S-ATT-SVM) with the Frobenius norm objective, while optimizing $(W_k, W_q)$ separately biases it towards (S-ATT-SVM) with the nuclear norm objective. In our paper's setting (Lemma 1 and **E1** above), this distinction vanishes as we fix $x_{i1}\gets a$, leading to a rank-1 matrix solution for S-ATT-SVM.
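As a hedged illustration of what the (S-ATT-SVM) constraints mean (this is not the authors' implementation; the helper name and the 2-D example are ours), the snippet below checks whether a candidate $W$ separates each sequence's optimal token from the rest with margin at least 1:

```python
# Feasibility check for the (S-ATT-SVM) constraints:
# (x_{i,a_i} - x_{i,t})^T W x_{i,1} >= 1 for all non-optimal tokens t.
def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def satisfies_s_att_svm(W, sequences, opt_idx):
    """sequences[i] is a list of token vectors; opt_idx[i] indexes the optimal token."""
    for tokens, a in zip(sequences, opt_idx):
        Wx1 = matvec(W, tokens[0])            # W x_{i,1}
        opt_score = dot(tokens[a], Wx1)
        for t, x_t in enumerate(tokens):
            if t != a and opt_score - dot(x_t, Wx1) < 1.0:
                return False
    return True

# One toy 2-D sequence whose first token is also the optimal one:
seq = [[1.0, 0.0], [0.0, 1.0]]
feasible = satisfies_s_att_svm([[2.0, 0.0], [0.0, 2.0]], [seq], [0])    # margin 2 >= 1
infeasible = satisfies_s_att_svm([[0.5, 0.0], [0.0, 0.5]], [seq], [0])  # margin 0.5 < 1
```

The SVM problem itself then minimizes $\Vert W\Vert_F$ over the feasible set this helper describes.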
More in the response to Reviewer iE2T.
**E4:** We study the convergence of attention with random problem instances with $n=4, T=6$ and varying dimension $d = 2,5,10,100,300,500$. We find that,
- **(1, red bar)** for small $d=2,5$, $p(t)$ generated by gradient descent does not have to saturate softmax, i.e., it may not converge to a local ($p^{mm}$) or global ($p^{mm\star}$) max-margin direction. This is likely because the ATT-SVM is not feasible for small $d$.
- **(2, blue bar)** For larger $d$’s, $p(t)$ indeed converges to a $p^{mm}$.
- **(3, green bar)** As $d$ gets even larger, $p(t)$ converges more frequently to the $p^{mm\star}$ (for $d\geq 300$, 99% of the problem instances). We believe over-parameterization (large $d$) enables global convergence because as $d$ grows, the global direction $p^{mm\star}$ becomes the only direction that satisfies our Definition 2 (local optimality). Thus, local and global convergence coincide.
More in the response to Reviewers CU4X & 5i57.
Pdf: /pdf/eae942c9f6f6375407146729d36f47d9ee92e2ce.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes to give a mathematical explanation and analysis for the widely used attention mechanism. The authors formulate normal attention, self-attention and prompt tuning into one single formulation, and they connect attention to max-margin problems.
Strengths: 1. This paper proposes a mathematical analysis for the attention mechanism, which deepens our understanding of the operator.
2. The authors formulate several kinds of attention and prompt-tuning into one single formulation, which is practical and novel.
Weaknesses: 1. To be honest, I am not an expert in this area. I appreciate the authors' effort on the mathematical part for deep learning.
However, I think it would be better to give conclusions and design guidances based on the authors' observations.
For example, can we link and explain some phenomena or pains during attention-based model training? Can we improve or accelerate training by improving network structure or losses?
2. The authors discuss attention modules based on isolated simple operators. I think it is very helpful.
However, I wonder whether the conclusions remain the same if we extend to real large-scale attention-based models.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please address the questions in weakness part.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: As indicated in weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and helpful suggestions.
> **W1:** To be honest, ... improving network structure or losses?
**R:** Thank you for your questions. In response to the reviewer’s concern, under **W2**, we provide and discuss real-data experiments which demonstrate that our theory successfully predicts important empirical phenomena related to the optimization dynamics of transformers and attention mechanisms. To provide a more general perspective, as highlighted by Reviewer 8XUp, the recognition of implicit bias is essential across various avenues, including but not limited to fairness, behavior optimization, and generalization bounds. We provide further discussions on these aspects as follows:
- **Fairness and Bias Mitigation**: Gradient descent is a fundamental optimization algorithm widely used in the training and fine-tuning of large language models. Language models trained using gradient descent can inherit biases present in the training data. Understanding the implicit bias of gradient descent in this context allows researchers to identify and mitigate biases, ensuring fairness and ethical use of language models.
- **Generalization Bounds**: The implicit bias of gradient descent is closely tied to the generalization capabilities of trained models. Understanding this relationship helps in establishing theoretical bounds on a model's generalization performance. Specifically, the implicit bias of gradient descent influences how well language models can apply their learned knowledge to new language tasks. Understanding this relationship helps in determining how effectively language models generalize to various language-related challenges.
- **Robustness and Regularization**: Implicit bias can influence the regularization properties of gradient descent. By understanding how the algorithm tends to favor certain solutions, we can develop regularization techniques that encourage better model generalization and robustness against noise and overfitting.
- **Algorithmic Choices**: Knowledge of the implicit bias of gradient descent helps in selecting appropriate optimization methods when training language models. Different algorithms exhibit varying biases, and understanding these nuances can guide the choice of optimization approach based on the desired behavior of the language model.
In summary, acknowledging implicit bias in gradient descent is crucial for exploring the fairness, optimization behavior, and generalization bounds of training LLM algorithms. Our theory takes the initial steps in this direction.
> **W2:** The authors ... the conclusions remain the same?
**R:** Thank you for your suggestion. We have carried out additional experiments to demonstrate the expansion of our findings to real large-scale attention-based models. These experiments also serve to illustrate how our theory proficiently anticipates two essential empirical phenomena associated with the optimization dynamics of transformers and attention mechanisms:
- The attention map (i.e. softmax output) becomes more sparse over time by focusing on the most informative tokens.
- This is achieved by the norm of the attention weights ($W$) growing over time and leading to a "saturating" effect on softmax, resulting in a sparse pattern.
Our experiments in Figure 2 verify both of these predictions. We train a vision transformer (ViT-base) model from scratch with the CIFAR-10 dataset for 400 epochs with a fixed learning rate $3\times 10^{-3}$.
- In Figure 2 (left), we present the progressive change in attention weights of the [CLS] token (which corresponds to our $p$ parameter) during training, computed from all attention heads within the model.
- In Figure 2 (right), we display the norm of attention weights and the sparsity level of attention maps averaged over all layers. We used $($L1norm/L2norm$)^2$ of the attention maps as a soft-sparsity measure, where a smaller value indicates a sparser vector.
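A minimal sketch of this soft-sparsity measure may be useful; it assumes the attention map is a plain Python list, and the function name is ours, not from the paper:

```python
import math

def soft_sparsity(v):
    """(L1norm/L2norm)^2: equals 1 for a one-hot vector and len(v) for a
    uniform vector, so smaller values indicate a sparser attention map."""
    l1 = sum(abs(x) for x in v)
    l2 = math.sqrt(sum(x * x for x in v))
    return (l1 / l2) ** 2

one_hot = soft_sparsity([1.0, 0.0, 0.0, 0.0])      # 1.0 (perfectly sparse)
uniform = soft_sparsity([0.25, 0.25, 0.25, 0.25])  # 4.0 (fully dense)
```

Averaging this quantity over the attention maps of all layers gives the sparsity curve plotted in Figure 2 (right).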
Initially, during the early epochs of training, the attention weights are randomly distributed, leading to a dense pattern. However, as training progresses, the weights grow, causing the attention map to gradually become sparser. Consequently, the attention map starts to focus on fewer salient patches within the image that possess distinct features that aid in classification.
In light of the concerns raised by the reviewer regarding the framework investigated in this study, we find it crucial to highlight the practical significance of our attention model. Within transformers, the parameter $p$ corresponds to a trainable prompt [Lester et al. EMNLP’21] or the '[CLS]' token [Devlin et al. NAACL’19], as mentioned in [Oymak et al. ICML’23]. To comprehensively address the reviewer's concerns, we also provide new experiments that showcase the extensibility of our work. To this aim, we studied optimization dynamics of self-attention, as illustrated in the attached Figure 3, using the following objective
$\quad L(W)=\frac{1}{n}\sum_{i=1}^n \ell(Y_i\cdot v^\top X_i^\top \mathbb{S}(X_i W^\top x_{i1})) \quad$ (SA-ERM)
This corresponds to running linear classification on the first token output of a self-attention layer ($x_{i1}$). We recently discovered that a slightly modified ATT-SVM can predict the implicit bias of self-attention:
$\min_{W} \left\Vert W \right\Vert_F \quad \text{subject to} \quad ( x_{i\alpha_i}-x_{it})^\top W x_{i1} \geq 1 \quad \text{for all } \quad t\neq \alpha_i, i\in[n] \quad$ (S-ATT-SVM)
Fig 3 shows **self-attention solutions directionally align** with (S-ATT-SVM). Empirically, we observed that optimizing $W_{prod}$ biases gradient descent towards (S-ATT-SVM) with the Frobenius norm objective, while optimizing $(W_k, W_q)$ separately biases it towards (S-ATT-SVM) with the nuclear norm objective. In short, while self-attention introduces different behavior and deserves separate investigation, we believe the attention<->SVM connection introduced by our work is fundamental and remains valid.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will slightly increase score.
---
Reply to Comment 1.1.1:
Comment: Your review of our paper is greatly appreciated. | Summary: The paper focusses on the optimization dynamics of attention mechanism. The authors analyze a softmax-attention model and demonstrate that running gradient descent on its parameters leads to a max margin solution, separating optimal tokens from non-optimal ones. The authors also present a regularization path analysis, demonstrating the convergence of solutions for nonlinear classifier heads. Overall, the paper aims to enhance the understanding of attention mechanisms and their optimization dynamics in large language models.
Strengths: 1. Comprehensive Characterization: The paper analyzes the fundamental attention model and its connection to max-margin problems. Analysis of this connection is quite original to the best of my knowledge. It overall advances the understanding of the attention mechanism from another theoretical perspective.
2. Convergence Insights: The paper looks into the convergence characteristics of gradient descent for tuning the token/prompt. This analysis can be used to drive further improvements in the optimization of large language models.
3. Joint Parameter Analysis: Through the analysis of the regularization paths, this work highlights the implicit biases and interactions between the parameters (v, p) and describes their joint convergence.
4. Implications for Future Research: The work suggests promising avenues for future studies, such as exploring similar analysis for self-attention layers and multiple tunable tokens.
5. Real-World Relevance: Exhaustively understanding attention mechanisms in large language models is crucial for enhancing their performance in natural language processing tasks.
6. Numerical Validation: The authors provide empirical evidence supporting their theoretical findings through numerical experiments.
Weaknesses: 1. Lack of Concrete Examples: The paper could definitely benefit from providing more examples to illustrate the concepts. The findings of the paper are very abstract and make it hard for readers to grasp the implications.
2. Limited support from other works: While the paper thoroughly analyzes the attention mechanism from a max-margin perspective, it does not directly indicate whether its claims align with other relevant theoretical analyses of the attention mechanism.
3. Complexity of Analysis: The optimization-theoretic characterization might be challenging for readers without a strong background in the subject, making it less accessible to a broader audience.
4. Lack of application to Real-World Data: The paper leaves uncertainty about the applicability of the findings in practical attention based model design and applications.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you provide more concrete examples illustrating the application of the findings to real-world LLM tasks?
2. How do the revealed implicit biases in the joint parameter analysis affect the interpretability and generalization capabilities of the attention model?
3. Can you elaborate on how the optimization-theoretic characterization and convergence insights presented in your work can be practically leveraged to enhance the training and fine-tuning of large language models?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The work is highly theoretical and does not present any potential negative societal impacts to the best of my knowledge.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your positive feedback and helpful suggestions.
> **Q1:** Can you provide more concrete examples/applications on real-world tasks?
**R:** Thank you for this suggestion. We have conducted new experiments using real data, demonstrating how our theory successfully predicts two important empirical phenomena related to the optimization dynamics of transformers and attention mechanisms:
- The attention map (i.e. softmax output) becomes more sparse over time by focusing on the most informative tokens.
- This is achieved by the norm of the attention weights ($W$) growing over time and leading to a "saturating" effect on softmax, resulting in a sparse pattern.
Our experiments in Figure 2 verify both of these predictions in line with our gradient descent convergence in Theorems 1&3. We train a vision transformer (ViT-base) model from scratch with the CIFAR-10 dataset for 400 epochs with a fixed learning rate $3\times 10^{-3}$.
- In Figure 2 (left), we present the progressive change in attention weights of the [CLS] token (which corresponds to our $p$ parameter) during training, computed from all attention heads within the model.
- In Figure 2 (right), we display the norm of attention weights and the sparsity level of attention maps averaged over all layers. We used $($L1norm/L2norm$)^2$ of the attention maps as a soft-sparsity measure, where a smaller value indicates a sparser vector.
Initially, during the early epochs of training, the attention weights are randomly distributed, leading to a dense pattern. However, as training progresses, the weights grow, causing the attention map to gradually become sparser. Consequently, the attention map starts to focus on fewer salient patches within the image that possess distinct features that aid in classification.
> **Q2:** How do the revealed implicit biases in the joint parameter analysis affect the interpretability and generalization capabilities of the attention model? Can you elaborate on how the optimization-theoretic characterization and convergence insights presented in your work can be practically leveraged to enhance the training and fine-tuning of large language models?
**R:** Our joint analysis in Section 3 has a surprisingly interpretable message: $p$ and $v$ (essentially) converge to their respective max-margin solutions, thus, optimization dynamics of “classification” (v) and “attention” (p) can be decoupled. Second, as also pointed out by Reviewer 8XUp, understanding implicit bias is essential across various domains, including interpretability, fairness, optimizer choice, and generalization bounds, because it connects complex optimization dynamics to amenable problems (like our attention SVM). Future works can study various aspects of transformers (TF) through the SVM lens. Below, we provide detailed discussion in the context of language models and how our optimization-theoretic characterization can enhance the training and fine-tuning of LLMs.
* **Fairness and Bias Mitigation**: Gradient descent is a fundamental optimization algorithm widely used in the training and fine-tuning of large language models. Language models trained using gradient descent can inherit biases present in the training data [[GPT-4] (https://arxiv.org/pdf/2303.08774.pdf)]. Understanding the implicit bias of gradient descent in this context allows researchers to identify and mitigate biases, ensuring fairness and ethical use of language models.
* **Generalization Bounds**: The implicit bias of gradient descent is closely tied to the generalization capabilities of trained models [[Vardi23](https://arxiv.org/abs/2208.12591)]. Understanding this relationship helps in establishing theoretical bounds on a model's generalization performance. Specifically, the implicit bias of gradient descent influences how well language models can apply their learned knowledge to new language tasks. Understanding this relationship helps in determining how effectively language models generalize to various language-related challenges.
* **Robustness and Regularization**: Implicit bias can influence the regularization properties of gradient descent. By understanding how the algorithm tends to favor certain solutions, we can develop regularization techniques that encourage better model generalization and robustness against noise and overfitting.
* **Algorithmic Choices**: Knowledge of the implicit bias of gradient descent helps in selecting appropriate optimization methods when training language models. Different algorithms exhibit varying biases, and understanding these nuances can guide the choice of optimization approach based on the desired behavior of the language model. Finally, it is also possible that one can develop new training algorithms: For instance, can we literally solve an SVM during training to accelerate TF optimization (e.g. after identifying which tokens to separate with SVM)?
In conclusion, characterizing implicit bias is paramount for multiple aspects and, by comprehending the interplay between optimization and these aspects, we can potentially enhance training and fine-tuning processes, leading to more principled, efficient, and trustworthy language models. | null | null | null | null |
Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation | Accept (poster) | Summary: This paper motivates the mixture weight estimation problem using Multi-source Multi-target Domain Adaptation problem. More specifically, this paper considers how to estimate the optimal mixture of sources, given a target domain; also, when there are multiple target domains, how to solve empirical risk minimization (ERM) for each target using a possibly unique mixture of data sources in a computationally efficient manner. This paper tackles both problems by constructing new efficient algorithms with a convergence guarantee.
Strengths: This paper provides a rigorous theoretical analysis of the optimization algorithm proposed in the paper and considers both offline and online settings for the problem.
Weaknesses: This paper focuses more on the theoretical analysis of a specific optimization problem instead of addressing the Multi-source Multi-target Domain Adaptation (M2DA) problem.
1. The bound in Theorem 1 can be quite loose; therefore, minimizing the right-hand side does not necessarily result in good weights in the target domain. As the right-hand side of this bound is minimized using empirical data, will there be an extra overfitting issue? Is there a way to bound this gap?
2. If some training data from the target domain is available, i.e., $\hat{\mathcal{T}}$, why not use these target data in the training of $h$ so that you have N+1 dimensional domain weights?
3. The relaxation from (1) to (2) seems arbitrary, especially the removal of the concave function of the square root.
4. There are no experimental results to verify the effectiveness of the proposed algorithm, not even on synthetic data.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I appreciate the theoretical contribution made in the paper by analyzing the convergence of the proposed stochastic corrected gradient descent ascent algorithm. However, the current way of presenting these results is nothing but motivating the specific convex-nonconcave minimax problem using a contrived application of M2DA, which seems to be quite off. I would suggest the authors revise the paper by focusing on the theoretical contribution made to the convex-nonconcave minimax problem considered in eq (2) and the co-component ERM problems in eq. (3), which are interesting in their own right. However, this may require a significant rewrite of the paper and even a change in the title.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: There are no experimental results to verify the effectiveness of the proposed algorithm, not even on synthetic data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **This paper is more like optimization not M2DA paper**
We are afraid that we have to respectfully disagree with you on this point. As we mentioned in the global rebuttal, the mixing-domain type of multi-source learning algorithm typically contains two parts: finding good mixing weights and solving ERM. Our primary goal is to find good mixing weights, and the minimax algorithm is just the technique to achieve this. As for the second part of our paper, we consider a novel but very important setting, in which there exist multiple target domains to adapt to. This setting motivates a new theory topic, co-component ERM, and we provide two possible ways to solve it.
**Theorem 1 can be quite loose, and Eq. 1 uses empirical risk which will result in overfitting**
First, we would argue that Theorem 1 is not loose: it recovers the optimal generalization risk, which is subroot in the number of samples. We agree with you that Eq. 1 is an empirical estimation of the RHS of Theorem 1, where we use the empirical risk to replace the population risk in Theorem 1. The difference between empirical risk and population risk can be bounded by Rademacher complexity. We drop the Rademacher complexity terms for two reasons: first, they scale as $\sqrt{\frac{1}{m_i}}$, a subroot function of the number of samples, which is already captured by the second term in Eq. 1. Second, computing Rademacher complexity is usually expensive. Since it is not a dominating term, we drop it for computational convenience, following [KL19]. Adding the complexity term to our objective, it would still be a convex-nonconcave problem, and our Algorithm 1 would still work. Finally, optimizing with the empirical discrepancy is a very standard technique in the domain adaptation field; for example, see [MMA09], [BBCLPV10], [ZLLJ19].
**If some training data from the target domain is available, why not use target data as source as well?**
We agree with you that we can use the target data as one additional source. Indeed, our theory applies to this setting by simply changing the $N$ sources to $N+1$ sources; the rest of the analysis still works.
**The relaxation from (1) to (2) seems arbitrary, especially the removal of the concave function of the square root.**
Indeed, the square root term in (1) is convex in $\alpha$, and we relaxed a convex objective to a strongly convex one. This operation is quite standard in optimization, e.g., relaxing an $\ell_2$ norm to a squared $\ell_2$ norm.
**No experiment**
As we mentioned in global answer, we provide experiments with two layer MLP on MNIST dataset in rebuttal pdf.
[KL19] N. Konstantinov and C. H. Lampert. Robust learning from untrusted sources. In International Conference on Machine Learning (ICML), pages 3488–3498. PMLR, 2019.
[MMA09] Mansour, Yishay, Mehryar Mohri, and Afshin Rostamizadeh. "Domain adaptation: Learning bounds and algorithms." arXiv preprint arXiv:0902.3430 (2009).
[BBCLPV10] Ben-David, Shai, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. "A theory of learning from different domains." Machine learning 79 (2010): 151-175.
[ZLLJ19] Zhang, Yuchen, Tianle Liu, Mingsheng Long, and Michael Jordan. "Bridging theory and algorithm for domain adaptation." In International conference on machine learning, pp. 7404-7413. PMLR, 2019.
---
Rebuttal 2:
Comment: Dear Reviewer Ety2,
We want to thank you for your constructive suggestions and thoughtful reviews, which are valuable to improving our paper.
We understand that we are not supposed to bother reviewers, but as a follow-up on our rebuttal, we would like to kindly remind you that the discussion period is closing soon. We hope to use this open response window to discuss the paper, answer follow-up questions, and improve the quality of our paper. Have you had a chance to read our rebuttal, in which we tried our best to address your concerns? We want to make sure that you found our responses solid and convincing. Note that we have already provided some additional experimental results in our response to Reviewer 9T2K, and we would be more than happy to provide more information or clarification.
The authors
---
Rebuttal Comment 2.1:
Title: Thanks for the response
Comment: The rebuttal addresses my concerns about weaknesses 2 and 3. I would increase my score by 1.
If the goal of this paper is to solve M2DA, then the significance of the theoretical contributions is discounted by the relaxations and simplifications, i.e., the analysis only covers the optimization of an upper bound. In that case the empirical results should demonstrate the effectiveness of the proposed method. The added experiments on MNIST are inspiring but not sufficient: more complicated multi-source domain adaptation datasets and models beyond a two-layer NN should be added. I agree with reviewer 3rHn that
"A more focused way of presenting the paper would be to put the theory part with sufficient experiments in the main content while putting the rest of the theoretical results as an addition."
In summary, I think that the changes required for publication are too significant for me to recommend acceptance in this review process.
---
Reply to Comment 2.1.1:
Title: Additional experiments on Office dataset
Comment: Thanks for your feedback! We also appreciate your constructive suggestions, and we would like to address your remaining concerns as follows:
1. Minimizing the generalization bound is a natural and common technique to obtain a better model in domain adaptation; see the seminal works [1][2][3][4]. All of these works derive a generalization bound for domain adaptation and minimize its right-hand side to yield a good model.
2. We conducted more experiments on the Office dataset, a widely used domain adaptation benchmark. The dataset contains three sub-datasets, Amazon, Webcam, and DSLR, collected from different scenarios.
Domain generation:
| Group | Domains per group | Samples per domain |
|-----------|-----------|-----------|
| Amazon | 5 | 500 |
| Webcam | 5 | 100 |
| DSLR | 5 | 100 |
Results:
| | Target (Amazon) | Target (Webcam) |
|-----------|-----------|-----------|
| Average ERM | 86.25% | 83.75% |
| Pure target training | 58.75% | 42.5% |
| Our method | **91.25%** | **90%** |
Due to the time limit, we only post partial results. We will add more target settings, as well as ResNet results, in the revised version. Thanks again for your suggestion, and hopefully this further mitigates your concerns.
References:
[1] Ben-David, Shai, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. "A theory of learning from different domains." Machine learning 79 (2010): 151-175.
[2] Mansour, Yishay, Mehryar Mohri, and Afshin Rostamizadeh. "Domain adaptation: Learning bounds and algorithms." arXiv preprint arXiv:0902.3430 (2009).
[3] Zhang, Yuchen, Tianle Liu, Mingsheng Long, and Michael Jordan. "Bridging theory and algorithm for domain adaptation." In International conference on machine learning, pp. 7404-7413. PMLR, 2019.
[4] N. Konstantinov and C. H. Lampert. Robust learning from untrusted sources. In International Conference on Machine Learning (ICML), pages 3488–3498. PMLR, 2019. | Summary: Authors formulate the problem of optimizing mixture weights given the target domain as a compositional convex-concave minimax optimization problem. They then propose a stochastic descent-ascent algorithm for solving the problem, which improves upon the previous method of [31] by allowing stochastic updates. The authors then address a second problem: given a large number of target domain distributions, how to efficiently find model weights for all of the target distributions? This is discussed in both offline and online settings. In the offline setting, a two-layer ReLU neural network is trained to output model weights; the model is essentially a "hypernetwork" that outputs model weights. In the online setting, a nonparametric online regression method is adapted for the problem.
Strengths: Authors identify an important instance of a convex-nonconcave minimax problem. The learning of mixture weights has become an important topic of research, both theoretically and empirically. For example, methods like Group DRO (Sagawa et al., https://arxiv.org/abs/1911.08731 ) and DoReMi (Xie et al., https://arxiv.org/abs/2305.10429 ) have been drawing attention. The reduction of this learning problem to an abstract convex-nonconcave optimization problem will facilitate the adoption of techniques from the broader optimization literature.
Authors also push the state of the art on convex-nonconcave minimax problems by developing a stochastic version of the algorithm, adopting recently developed techniques such as stochastic corrected gradients.
Weaknesses: The first half of the paper (mixture weight optimization) and the second half (model weight prediction) are related, but the connection is not very strong. These two ideas could have made two good separate papers. Combining the results makes this paper very rich in technical content, but on the other hand, readers of the main body will learn much less than they would from two separate papers. The authors' rationale seems to be that mixture weights for the second problem could be found with the algorithm from the first half, but in that case we already know the model weights; thus, the second problem is not very well motivated by the first.
There are no numerical experiments in this paper, and hence its results are not numerically validated. This also makes the practical utility of the proposed algorithms less clear.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The offline version of the algorithm (Section 3.1) is quite different from the online version (Section 3.2). Would it be meaningful to compare them against each other? For example, the offline version could be applied to the online setting by occasionally computing the label, and the online version could be applied to the offline setting as a baseline?
In line 139, the two relaxations in lines 139-144 are described as standard. But within which literature are they standard? Can the authors provide references for such relaxations?
Can stochastic versions of the Algorithm 2 exist (in terms of both $i$ and data points), which can alleviate the dependency on $M$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: What are alternative formulations for mixture weight estimation (1), and how would convex-nonconcave formulation compare against them? Practically, wouldn't it be too pessimistic to consider the supremum over $\mathcal{H}$?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! We will try to address your concerns as follows.
**The connection between two parts of the paper**
We agree with you that the two parts of the paper each have independent interest. We put them into one paper because, together, they solve the multi-source multi-target domain adaptation problem, which makes for a complete story.
**No experiments**
As we mentioned in the global answer, we provide experiments with a two-layer MLP on the MNIST dataset in the rebuttal PDF.
**The two relaxations**
Our first relaxation replaces the absolute value with $\sqrt{x^2+c}$, which is used in the non-smooth optimization literature, e.g., [CLY23]. Our second relaxation is similar to relaxing the $\ell_2$ norm to the squared $\ell_2$ norm, which is widely used in practice when we wish to regularize the norm of the model.
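As a small self-contained illustration of the first relaxation (our own sketch, with arbitrary constants): the surrogate $\sqrt{x^2+c}$ is differentiable everywhere and upper-bounds $|x|$ by at most $\sqrt{c}$, so the relaxation perturbs the objective only by an amount controlled by $c$.

```python
import numpy as np

def smooth_abs(x, c=1e-6):
    # Smooth surrogate for |x|: differentiable at 0, and
    # |x| <= sqrt(x^2 + c) <= |x| + sqrt(c) for every x.
    return np.sqrt(x ** 2 + c)

xs = np.linspace(-1.0, 1.0, 201)
gap = smooth_abs(xs) - np.abs(xs)
assert np.all(gap >= 0.0)       # surrogate upper-bounds |x| ...
assert np.all(gap <= 1.01e-3)   # ... by at most roughly sqrt(c) = 1e-3
```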
**Can stochastic versions of Algorithm 2 exist (in terms of both $i$ and data points), which can alleviate the dependency on $M$?**
Thanks for raising this interesting point. We believe this is feasible, by introducing the sampling idea into the dynamics of Algorithm 2. We will seriously consider it as promising follow-up work.
**Alternative formulations for mixture weight estimation.**
To the best of our knowledge, optimizing Eq. 1 is the only mixture weight estimation method with theoretical support. However, we believe there must be other ways to optimize the mixture weights, which we leave as a promising open problem.
**Would it be too pessimistic to consider the supremum over the whole hypothesis class?**
The supremum over the hypothesis class is used to achieve a uniform convergence generalization bound, but we also note that there are localization techniques, e.g., local Rademacher complexity, that could enable finer generalization analysis by studying the subset of hypotheses with small risk. In the domain adaptation field, a localized discrepancy measure was proposed by Zhang et al. (2020). We believe applying this idea to the multi-source learning setting is a very interesting open problem.
[CLY23] Chen, Xiaojun, Lingfeng Niu, and Yaxiang Yuan. "Optimality conditions and a smoothing trust region Newton method for non-Lipschitz optimization." SIAM Journal on Optimization 23.3 (2013): 1528–1552.
---
Rebuttal Comment 1.1:
Title: Thanks for answers
Comment: Thank you very much for the thoughtful answers to my questions. These make sense, and I agree they would be better addressed in a subsequent paper. Good luck pursuing these directions.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Thank you so much for your comments. We are truly grateful for your encouraging feedback and insightful suggestions! | Summary: This paper is about the multi-source multi-target domain adaptation problem. The authors formulate a minimax algorithm to find the mixture weights of source domains. Furthermore, the authors extend it to the scenario of multi-target domains and introduce the co-component ERM problem. For this problem, this paper proposes algorithms to efficiently solve co-component ERM problems, in offline and online fashions.
Strengths: This paper extensively and theoretically studied the domain adaptation problem from the minimax optimization perspective.
The paper gives the convergence analysis of the proposed algorithms.
For the multi-target domain scenario, the authors provide solutions to both the offline and online settings. The solution is more efficient than training each target domain adaptation independently.
Weaknesses: This paper is fully theoretical. Its research focus, domain adaptation, has many open benchmark datasets, so it would be better to evaluate the proposed methods experimentally.
In line 148, it is mentioned that a deterministic algorithm exists in the literature, while this paper focuses on the stochastic one. It would be good to elaborate more on the benefits of the stochastic version and, meanwhile, to compare them experimentally.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: As for the minimax optimization, in Eq.(1) and Eq.(2), why are the model/hypothesis parameters $\omega$ to maximize the difference between the target and source domain? Should it be the minimization?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No potential negative societal impact is found in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! We will try to address your concerns as follows.
**No experiments**
As we mentioned in global answer, we provide experiments with two layer MLP on MNIST dataset in rebuttal pdf.
**Comparison with deterministic convex-nonconcave optimization**
In Xu et al. 2023, they achieve a convergence rate of $O(\kappa_F^2/\epsilon^2)$, which is faster than our $O(\kappa_F^4/\epsilon^4)$.
However, the problem with the deterministic algorithm is that, in practice, computing the full gradient is too expensive, and it is hard to implement since we do not have enough memory to compute a full-batch gradient directly.
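To make the contrast concrete, here is a generic mini-batch projected-gradient sketch on a toy mixture-weight problem (our illustration, showing only the descent half of a descent-ascent scheme and not the exact update of Algorithm 1): each step touches only a small batch rather than the full dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_simplex(v):
    # Euclidean projection onto the probability simplex (standard algorithm).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

# Toy: 3 sources with per-sample "losses"; find simplex weights alpha
# minimizing the weighted loss using cheap mini-batch gradients
# instead of a full pass over all 10,000 samples at every step.
losses = rng.normal(loc=[0.2, 1.0, 2.0], scale=0.1, size=(10_000, 3))
alpha = np.ones(3) / 3
lr = 0.05
for t in range(500):
    batch = losses[rng.integers(0, len(losses), size=32)]  # mini-batch
    grad = batch.mean(axis=0)                              # stochastic gradient
    alpha = project_simplex(alpha - lr * grad)

# Most weight goes to the lowest-loss source.
assert alpha.argmax() == 0
```

The full-batch variant would replace the mini-batch mean with `losses.mean(axis=0)` at every iteration, which is exactly the memory and compute cost that the stochastic version avoids.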
**Why use maximization over hypothesis space in Eq.1 and Eq.2**
We use the maximization over the hypothesis space because our bound in Theorem 1 is a uniform convergence bound, which holds for every hypothesis in the class $\mathcal{H}$.
---
Rebuttal Comment 1.1:
Title: Thanks for answers. Score remained.
Comment: Thanks for the authors' answers and the added experimental evaluation, which all help better understand the paper!
I prefer to keep the score for the following reasons.
1. I agree with other reviewers that Sec. 2 and Sec. 3 look a bit disconnected, and thus the paper is a bit overwhelmed by the theoretical results, while the motivation of the corresponding problem formulations looks weak.
A more focused way of presenting the paper would be to put the theory part with sufficient experiments in the main content while putting the rest of the theoretical results as an addition.
2. Though the authors provided experimental evaluation during the rebuttal, considering the several different problem setups proposed in this paper, it is still unclear whether a sufficient experimental evaluation is doable for all these setups, and also whether the experiments for these setups are realistic.
Meanwhile, since this paper is not within my research areas, I suggest the AC consider the low confidence score of my review. | Summary: Authors propose a new way to compute mixture coefficients for combining multiple empirical risk minimization objectives (w.r.t. different sources in domain adaptation) in a way that takes into account the relation to a new target domain. As an application, the authors consider the multi-source multi-target domain adaptation scenario and solve it by predicting (the weights of) new target classifiers from the mixture weights.
Strengths: - The considered phase transition is interesting.
- The weights could be provided by a domain expert who is, e.g., certain about a physical relation between the domains.
- Convergence of the algorithm as extension of [31] is interesting.
Weaknesses: - No empirical intuition on whether the algorithm can be implemented with reasonable effort. I consider the result to be of purely theoretical interest.
- The error in the computation of the weights $\alpha_i$ is not taken into account in the error-rate results. This could dominate the convergence rate results.
- My impression is that, if we are able to solve Eq. (1) efficiently, then separate training for each target domain should also give us comparable accuracy. I don't see any argument, neither theoretical nor practical, which guarantees that the proposed algorithm improves on separate learning (which is possible with labels in the target domain).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - For computing $\alpha$ in Eq. (1), one needs to compute differences in empirical risks (on target vs. source datasets). Errors in this difference seem to aggregate into errors in the solution for $\alpha$. How does this affect the final prediction of the target model from the mixture weights?
- In meta-learning there is the approach of representing novel domains by "meta-features" and then mapping these meta-features to the hyper-parameters of novel target domains. The hyper-parameters can also be model weights, as in your case given by $w^\ast(\alpha)$ cited in the introduction. Consequently, the theory of meta-learning should also apply to your setting. How does this theory, e.g. [1], compare to your convergence results? Can the same phase transition be observed assuming the mixture weights are exact?
[1] https://www.jmlr.org/papers/volume6/maurer05a/maurer05a.pdf
------------------------------
After rebuttal: My questions are addressed. I increase my score by 2.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Influence of error of weight estimation should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! We will try to address your concerns as follows.
**No empirical intuition if the algorithm can be implemented with reasonable effort**
We implemented our Algorithm 1 and provide results on the MNIST dataset. It turns out the mixing parameters output by our algorithm yield a good model, outperforming both the naive ERM model and a model learned purely on the target domain.
**The error in the computation of the weights**
We assume you mean the error between the alpha output by Algorithm 1 and the optimal solution of objective Eq. 2. We cannot characterize this error, since this is a convex-nonconcave problem and showing convergence to the global optimum is NP-hard.
**The separate learning on target domain**
In practice, the target domain usually has very little labeled data, and hence learning solely on the target domain will not yield a well-generalized model. Our experiment in the PDF (Table 2) also validates this argument.
**How does errors in the difference of empirical risks affect alpha**
Maximizing the inner level, the difference of empirical risks, is a non-concave problem, which cannot be solved exactly. Hence the whole problem (1) is convex-nonconcave. Solving a convex-nonconcave minimax problem is NP-hard, so we can only guarantee convergence to a stationary point. Since we cannot bound the gap between the output alpha and the globally optimal alpha of (1), we cannot characterize how good our output model is compared to the model learned under the optimal alpha.
**Comparison to meta-learning paper of [1]**
Thank you for pointing out this interesting relationship.
[1] analyzes a source aggregation scheme (CP-Regression) where the feature vector for a new task is formed from the predictions of models trained on the source datasets.
Indeed, one could imagine obtaining a predictor for the (N+1)-th task using this approach.
That said, [1] only shows that the generalization gap, that is, the difference between the risk and the empirical risk, can be controlled using such a scheme.
Therefore, their result does not say how close such a solution is to the best possible one on the target problem.
In fact, achieving the best possible performance on the target problem might require a very different algorithm: for instance, [KKS21] show lower bounds for meta-learning in the linear case, which suggest that aggregation as in CP-Regression is suboptimal because it does not take task covariance into account.
In our work we design an algorithm (solving Eq. (1) plus solving the equation in the panel at the end of page 2) which in the best case achieves the best possible performance on the target problem.
This is because our proposed algorithm is designed from the start to directly minimize a bound on the gap between the risk and the best possible risk on the target task.
[KKS21] M. Konobeev, I. Kuzborskij, and Cs. Szepesvári. A Distribution-dependent Analysis of Meta Learning. In International Conference on Machine Learning (ICML), 2021.
---
Rebuttal Comment 1.1:
Title: Thank you for your answer
Comment: I appreciate the authors' answer. At the same time, I still have some concerns:
- The empirical evidence provided by the new experiment is vague. I think a single split into several domains is too little to underpin the advantage of the algorithm compared to learning a single model on the target.
- A possible error made by sub-optimal weights can be incorporated assuming it is $\epsilon>0$. Does $\epsilon$ appear in the final error bound?
- I see there are many distinct insights (an empirical experiment, parts of the method mathematically analysed). At the same time, I cannot see a final argument for the circumstances under which the proposed approach is better than learning a single model on the target domain.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your comments. We will try to address your concerns as follows.
**More Empirical Comparison**
We totally agree that the initial empirical results, while demonstrating the effectiveness of estimating the mixture parameters, do not show the advantage of the algorithm compared to learning a single model on the target. This was mostly due to the large number of samples in the target domain, which then benefits little from the source domains. Per your question, we conducted more experiments on MNIST. This time, we create 3 groups of domains, each with 5 domains. Each group's domains only draw data from a subset of the 10 classes, and each domain has 100 training data points. These 15 domains are treated as source domains. We consider four different target domains: i) a domain from group 1, ii) a domain from group 2, iii) a domain from group 3, and iv) a mixed domain whose data are sampled from both group 1 and group 2.
Data Generation:
| Group | Classes | Domains per group | Samples per domain |
|-----------|-----------|-----------|-----------|
| 1 | 0,1,2 | 5 | 100 |
| 2 | 3,4,5 | 5 | 100 |
| 3 | 6,7,8,9 | 5 | 100 |
We run experiments on the four target domain settings and list the results below. In all of them, our method outperforms pure target training and average-domain training:
| | Target (Group 1) | Target (Group 2) | Target (Group 3) | Target (mixture of Groups 1 and 2) |
|-----------|-----------|-----------|-----------|-----------|
| Average ERM | 69.9% | 40.0% | 34.9% | 59.9% |
| Pure target training | 69.9% | 55.0% | 40.0% | 55.0% |
| Our method | **80.0%** | **69.9%** | **55.0%** | **65.0%** |
As can be observed from the above results, learning from source domains with learned mixture weights can improve accuracy on the target domain significantly. Interestingly, in some cases (Group 3) learning with average ERM is worse than just training on the target domain: due to the heterogeneity among data sources, naive averaging is not effective, which necessitates weighting each source based on its relatedness to the target domain (the main motivation of our work).
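For concreteness, a sketch (our own reconstruction, with a synthetic stand-in for the real MNIST labels) of the class-restricted, non-IID domain split described in the table above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for MNIST labels; in the real experiment these come from the dataset.
labels = rng.integers(0, 10, size=60_000)
groups = {1: [0, 1, 2], 2: [3, 4, 5], 3: [6, 7, 8, 9]}

def make_domains(group_classes, n_domains=5, n_per_domain=100):
    # Each group only sees its own subset of classes; each group
    # contributes several small domains of 100 samples each.
    pool = np.flatnonzero(np.isin(labels, group_classes))
    return rng.choice(pool, size=(n_domains, n_per_domain), replace=False)

domains = {g: make_domains(c) for g, c in groups.items()}
# Every domain in group 1 contains only classes 0-2, by construction.
assert all(set(labels[idx]) <= {0, 1, 2} for idx in domains[1])
```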
**A possible error made by sub-optimal weights can be incorporated assuming it is epsilon. Does appear in the final error bound?**
We agree that Algorithm 1 can only output sub-optimal weights, since the inner level of our objective is a nonconcave maximization problem. Characterizing the distance between the output and the global optimum is intractable (solving our convex-nonconcave problem is NP-hard), so we can only show convergence to a stationary point. If we pre-assume the error is $\epsilon$, then it can easily be incorporated into our error bound (we assume you mean Theorem 4) by a simple error decomposition:
\begin{align*}
\mathbb{E}\|h(\hat{\alpha}) - w^*(\alpha^*)\|^2 &\leq 2\,\mathbb{E}\|h(\hat{\alpha}) - w^*(\hat{\alpha})\|^2 + 2\,\mathbb{E}\|w^*(\hat{\alpha}) - w^*(\alpha^*)\|^2 \\
&\leq O\big(\kappa^2 d\, n^{-\frac{2}{2+N}}\big) + 2\kappa^2 \epsilon^2,
\end{align*}
where we pay an extra error on the order of $\epsilon$. Note that this does not change our conclusion about the phase transition between learning to solve ERMs and solving every ERM individually: even if we solve every ERM individually on the output weights $\hat{\alpha}$, this error still exists when comparing $w^*(\hat{\alpha})$ and $w^*(\alpha^*)$.
Thank you so much for raising this question. We will clarify this in the revised version.
**When learning on the mixed domain is better than pure target training**
As is evident from the experimental results above, when the target domain has very few data points and there are source domains similar to it, learning on the mixed domain yields a better solution than pure target training. When the target domain already has plenty of training data, and the source domains diverge significantly from the target, pure target training is the better choice. To see this, suppose we have $N$ source domains $D_1,...,D_{N/2} = D$ and $D_{N/2+1},...,D_{N} = -D$, where $-D$ has the same marginal distribution on $\mathcal{X}$ as $D$, but each data point is labeled by the opposite labeling function $-f(x)$. Also take the target domain to be $D$, the same as the first $N/2$ sources, but with very few data. In this case, learning only on target data, or learning on the average of the source domains, yields a very bad model, and the optimal option is to learn on the mixture of the first $N/2$ source domains. These relevant sources can be discovered automatically by learning the mixture weights with our proposal. As indicated in our experiments, this enables us to learn a model based on relevant sources, where the contribution of each source is proportional to its closeness (statistical discrepancy) to the target domain. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and constructive comments. We will gladly incorporate the suggestions.
We observe that reviewers have two primary concerns: the consistency of the story and the lack of empirical evaluation, which we will try to address as follows.
**The consistency of the story** How to efficiently and effectively learn from multiple sources is a longstanding problem in domain adaptation. Among the efforts to solve this problem, the most popular and classic method is to learn from a mixture of the sources [KL19, MMR+21], due to its simplicity of implementation and its theoretical research value. Mixture-based multi-source learning typically contains two phases: **Phase I**, finding 'good' mixture weights, and **Phase II**, performing ERM on the mixed domains.
Our work aims exactly at an efficient and effective multi-source learning algorithm, solving the two phases sequentially. The first part of our work provides the first provable algorithm for finding good mixture weights. As a side contribution, from a technical perspective, it has independent value for stochastic and compositional convex-nonconcave problems.
The second part of our work addresses the **Phase II** problem: how to efficiently perform ERM on the mixed domains when there are multiple target domains. We cast this as the *Co-component Empirical Risk Minimization* problem. This problem has not been studied before, and we provide two provable approaches: learning a Lipschitz function with a two-layer neural network, and a label-efficient online learning approach. We believe there are many other potential methods, and this paper is an initial work intended to inspire follow-ups.
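Schematically, the two-phase pipeline can be sketched as follows (a hypothetical linear-regression instantiation of ours, with hand-set mixture weights standing in for Phase I's learned output and weighted least squares standing in for Phase II's ERM):

```python
import numpy as np

rng = np.random.default_rng(1)

N, d, n = 3, 5, 200
true_w = rng.normal(size=d)

def make_source(shift):
    # Each source is a noisy linear-regression dataset; "shift" moves
    # its labeling function away from the target's true_w.
    X = rng.normal(size=(n, d))
    y = X @ (true_w + shift) + 0.1 * rng.normal(size=n)
    return X, y

sources = [make_source(s) for s in (0.0, 0.0, 5.0)]  # third source is off-target

# Phase I output (assumed here, not computed): weight on-target sources only.
alpha = np.array([0.5, 0.5, 0.0])

# Phase II: alpha-weighted ERM, here weighted least squares in closed form.
XtX = sum(a * X.T @ X for a, (X, y) in zip(alpha, sources))
Xty = sum(a * X.T @ y for a, (X, y) in zip(alpha, sources))
w_hat = np.linalg.solve(XtX + 1e-6 * np.eye(d), Xty)

assert np.linalg.norm(w_hat - true_w) < 0.1  # recovers the target model
```

With good mixture weights, the off-target source contributes nothing, and the weighted ERM recovers a model close to the target's ground truth.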
**The experiments** To demonstrate the effectiveness of our Algorithm 1, we implement it and run experiments on the MNIST dataset, with a two-layer MLP model.
*Data generation*: We constructed three distinct groups comprising a total of 10 source domains. The target domain shares the same class distribution as Group 1. The splitting and generation of the non-IID MNIST data, including the classes within each group, the number of domains, and the samples per domain, are detailed in Table 1.
*Convergence of the mixing parameter*: In Figure 1(a), we observe that the alpha values for source domains in Group 1 converge to 0.5, indicating that these domains transfer positively to the target domain. The other source domains, which share no classes with the target domain, have alpha values of zero, suggesting that they contribute nothing during training on the target domain. Figure 1(b) shows the final alpha value of each source domain at the end of training.
*Effectiveness of the learned mixing parameter*: We evaluated accuracy using three distinct empirical risk minimization methods: 1. weighted ERM using our learned weights, 2. ERM on the averaged loss, and 3. ERM solely on the target domain, and present our findings in Table 2. The results indicate that the accuracy achieved using the learned alphas outperforms the other two approaches. Additionally, accuracy comparisons during training between Learned Alpha and Average Weight are plotted in Figure 2.
[MMR+21] Y. Mansour, M. Mohri, J. Ro, A. T. Suresh, and K. Wu. A theory of multiple-source adaptation with limited target labeled data. In Arindam Banerjee and Kenji Fukumizu, editors, International Conference on Artificial Intelligence and Statistics (AISTATS), volume 130 of Proceedings of Machine Learning Research, pages 2332–2340. PMLR, 13–15 Apr 2021.
[KL19] N. Konstantinov and C. H. Lampert. Robust learning from untrusted sources. In International Conference on Machine Learning (ICML), pages 3488–3498. PMLR, 2019.
Pdf: /pdf/ac3aec9a7c74c9c15929f97ee8a277cd304ca398.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary:
Summary:
The paper addresses the problem of multi-source multi-target domain adaptation, where the goal is to learn a model from multiple sources in such a way that it performs well on a new target distribution. The context for this problem includes scenarios like learning from data collected from various sources (e.g., crowdsourcing) or in distributed systems with highly heterogeneous data. The two main unsolved problems in this context are: 1) how to estimate the optimal mixture of sources for a given target domain, and 2) how to efficiently solve empirical risk minimization for each target domain when there are numerous target domains, which can be computationally expensive. The paper proposes solutions to both of these problems using convex-nonconcave compositional minimax and overparameterized neural networks with provable guarantees. Additionally, an online algorithm for predicting parameters for new models given mixing coefficients is proposed.
Strengths: Theoretical Contributions: The paper proposes novel approaches to tackle the problems of mixture weight estimation and empirical risk minimization, utilizing convex-nonconcave compositional minimax and overparameterized neural networks, respectively. These contributions are supported with provable guarantees, which add rigor to the proposed methods.
Efficiency and Scalability: The paper emphasizes the efficiency of the proposed algorithms, particularly for mixture weight estimation and empirical risk minimization for multiple target domains. The avoidance of individual ERM for each target domain in certain cases helps reduce computational overhead.
Weaknesses:
Complexity: The proposed methods, such as convex-nonconcave compositional minimax and overparameterized neural networks, might be complex and difficult to implement for practitioners who are not familiar with these advanced techniques.
Applicability to All Domains: The paper may not clearly address the limitations or specific domains where the proposed techniques might not be directly applicable or might require additional adjustments.
Empirical Evaluation: The paper lacks details about empirical evaluations, such as experiments on real datasets or comparisons with other state-of-the-art methods. This could raise concerns about the practical effectiveness of the proposed algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How does the proposed convex-nonconcave compositional minimax approach differ from existing methods used for mixture weight estimation?
Can you provide more details on the theoretical guarantees of the proposed overparameterized neural network approach for empirical risk minimization and its relationship to the specific problem context?
Are there any assumptions made about the data distribution or source/target domains that could limit the generalizability of the proposed methods?
How does the proposed method perform in practice?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limited Empirical Validation: The lack of empirical evaluation or real-world case studies might raise questions about the practical effectiveness and applicability of the proposed methods to real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! We will try to address your concerns as follows.
**Complexity to implement algorithm**
Our mixture weight estimation algorithm is a single-loop primal-dual algorithm, which is widely used in minimax optimization and easy to implement. The only additional effort is implementing the correction step for the compositional term, which is also straightforward: it amounts to maintaining an auxiliary variable. In the provided PDF, we implement the algorithm and show convergence to the desired weights (Figure 1 in the PDF). As for Algorithm 2, it is standard neural network training and can be implemented in a few lines of PyTorch code.
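A single-loop primal-dual update with a correction variable can be sketched on a toy problem as follows (this is our own illustrative toy, not the paper's algorithm; the objective, step size, and variable names are all assumptions):

```python
# Illustrative sketch: a single-loop primal-dual method on the toy
# strongly-convex-strongly-concave problem
#   min_x max_y  x*y + 0.5*x**2 - 0.5*y**2,
# whose saddle point is (0, 0). The auxiliary variable u mimics the
# correction step for a compositional term: a running estimate of the
# inner quantity (here simply x itself).
def primal_dual(steps=300, lr=0.1, beta=1.0):
    x, y, u = 1.0, 1.0, 1.0
    for _ in range(steps):
        u = (1.0 - beta) * u + beta * x   # correction step: track the inner term
        gx = y + x                        # primal gradient in x
        gy = u - y                        # dual gradient in y, using the estimate u
        x, y = x - lr * gx, y + lr * gy   # simultaneous primal-dual update
    return x, y
```

With `beta = 1` the correction variable simply tracks the current iterate; `beta < 1` averages past iterates, which is the role the correction step plays in the stochastic compositional setting.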
**Applicability to All Domains**
Our algorithm can automatically find good mixture weights given any set of source domains. Hence, our algorithm can work well in any multi-source learning scenario. Our provided experiments also validate its effectiveness.
**Empirical Evaluation** As we mentioned in global rebuttal, we provide the experiments in the rebuttal pdf.
**Comparison with existing methods used for mixture weight estimation**
[KL19] also propose to optimize a similar objective to ours to obtain the mixture weight, but they give neither a practical algorithm nor a rigorous convergence guarantee. [MMR+21] propose to sample many mixture weights on the simplex, run ERM to obtain multiple candidate target models, and pick the one with the smallest target empirical risk as the final solution. The drawback of their method is that, when the dimension of the simplex is high, one may need to sample exponentially many weights to cover the whole simplex to a given accuracy.
**Can you provide more details on the theoretical guarantees of the proposed overparameterized neural network approach for empirical risk minimization and its relationship to the specific problem context?**
The guarantee for minimizing the empirical risk of an overparameterized neural network (Theorem 3) is based on a standard Neural Tangent Kernel (NTK) approximation argument [JGH18, BMR21]: we use the key fact that predictions made by a GD-trained overparameterized neural network are close to those made by a Kernel Least-Squares (KLS) predictor (given that the width of the network is polynomial in $n/d$).
The core idea has been recently explored in many papers, for instance, [DLL+19, OS20, BMR21].
In addition to that, in our work we need to establish that such neural networks are able to learn Lipschitz vector-valued target functions with iteratively refined labels -- which in the context of our paper means that given a mixture weight, a GD-trained neural network can output parameters for a target task.
Here we extend their proof to vector-valued functions and the inexact-label-observation setting, by analyzing the dynamics of a bi-level optimization.
**Are there any assumptions made about the data distribution or source/target domains that could limit the generalizability of the proposed methods?**
We do not make any assumptions on the data distribution. Our proposed mixture weight estimation algorithm can automatically adapt to different heterogeneity levels, since it takes the distribution discrepancy into account.
[BMR21] P. L. Bartlett, A. Montanari, and A. Rakhlin. Deep learning: a statistical viewpoint. Acta Numerica, 2021.
[DLL+19] S. Du, J. Lee, H. Li, L. Wang, and X. Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning (ICML), pages 1675–1685. PMLR, 2019.
[JGH18] A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: convergence and generalization in neural networks. In Conference on Neural Information Processing Systems (NeurIPS), 2018.
[KKS21] M. Konobeev, I. Kuzborskij, and Cs. Szepesvári. A distribution-dependent analysis of meta learning. In International Conference on Machine Learning (ICML), 2021.
[KL19] N. Konstantinov and C. H. Lampert. Robust learning from untrusted sources. In International Conference on Machine Learning (ICML), pages 3488–3498. PMLR, 2019.
[KS22] I. Kuzborskij and Cs. Szepesvári. Learning Lipschitz functions by GD-trained shallow overparameterized ReLU neural networks. arXiv:2212.13848, 2022.
[MJ05] A. Maurer and T. Jaakkola. Algorithmic stability and meta-learning. Journal of Machine Learning Research, 6(6), 2005.
[MMR+21] Y. Mansour, M. Mohri, J. Ro, A. T. Suresh, and K. Wu. A theory of multiple-source adaptation with limited target labeled data. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 130 of Proceedings of Machine Learning Research, pages 2332–2340. PMLR, 2021.
[OS20] S. Oymak and M. Soltanolkotabi. Toward moderate overparameterization: Global convergence guarantees for training shallow neural networks. IEEE Journal on Selected Areas in Information Theory, 1(1):84–105, 2020.
---
Rebuttal 2:
Comment: Dear Reviewer u3ZP,
We want to thank you for your constructive suggestions and thoughtful reviews, which are valuable to improving our paper.
We understand that we are not supposed to bother reviewers, but as the discussion deadline is approaching, we would like to follow up on our rebuttal. We hope to use this open response window to discuss the paper, answer follow-up questions, and improve the quality of our paper. Have you had a chance to read our rebuttal, in which we tried our best to address your concerns? We want to make sure that you found our responses solid and convincing. Note that we already provided additional experimental results in our response to Reviewer 9T2K, and we would be more than happy to provide more information or clarification.
The authors | null | null | null | null | null | null |
Rank-DETR for High Quality Object Detection | Accept (poster) | Summary: This paper focuses on the ranking problem in object detection. Inspired by the misalignment between class and location scores in DETR models, the paper proposes to redesign the model architecture and the loss to modulate the rank information. Experiments show that the proposed method can build upon current strong DETR detectors such as DINO and H-DETR to further boost performance by a notable margin (>1 mAP). The experiments also suggest the complementarity between the ranking model architecture and the ranking loss design.
Strengths: + The paper tackles an important problem in DETRs for object detection: the misalignment of the class scores and the localization scores.
+ The paper is well written and easy to follow. The paper has provided informative statistical plots. For example, the density distribution for QRL and HWC, and the illustration of box scores with and without the proposed ranking mechanism.
+ The experiments are solid and sound. The paper uses detailed ablation studies to justify the design choices of the proposed components. The method seems effective according to the experimental results, i.e., around +1% AP metric for DINO and H-DETR.
Weaknesses: - The whole process seems a little engineering. E.g., the addition of learnable bias.
- The main boost originates from the GIoU-aware loss, which has been investigated in Varifocal loss, Align-DETR and Stable-DETR (I am aware that the latter two are not formally published). This somehow hurts the novelty of the paper. A thorough discussion/comparison is expected.
- The method seems complicated and includes many steps. For example the rank-driven model design seems not to invite as much performance gains as expected.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The title is too broad and has little meaning. I recommend to change a name including the technical contribution.
- The addition of the RCH module even hurts the performance, especially for AP_L, it degrades 0.6 points. Please explain this phenomenon.
- The learnable logit bias vectors S and the randomly initialized content query C are confusing to me. Are they tricks or can the author give an explanation (or illustrations with figures) for their practical functions? I think further ablations on the two learnable bias are required.
- The result for DINO, R50, 12epochs in Table 1 is different from that in Table 3. Please check.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The current solution is somewhat straightforward, and the topic can be deep investigated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer kDha
Thanks for your detailed comments. The mentioned questions are addressed as follows.
---
> **The whole process seems a little engineering. E.g., the addition of learnable bias.**
A: We appreciate your feedback and are grateful for the opportunity to address your concern that our work is largely engineering. We agree that some of the proposed designs essentially require engineering techniques, and they are important for building high quality object detectors. We would also like to further simplify the engineering design in the near future.
---
> **The main boost originates from the GIoU-aware loss, which has been investigated in Varifocal loss, Align-DETR and Stable-DETR (I am aware that the latter two are not formally published). This somehow hurts the novelty of the paper. A thorough discussion/comparison is expected.**
A: Great point! First, we want to clarify that the other proposed components also bring significant improvements. For example, according to Table 4(a), RCH improves AP from 48.2 to 48.4 (+0.2), QRL improves AP from 48.4 to 49.1 (+0.7), GCL (GIoU-aware loss) improves AP from 49.1 to 49.6 (+0.5), and HMC improves AP from 49.6 to 50.0 (+0.4). Therefore, we can see that the proposed GIoU-aware loss contributes less than 30% of the overall gains. A thorough discussion/comparison with Align-DETR and Stable-DINO is provided in the general response.
---
> **The method seems complicated and includes many steps. For example the rank-driven model design seems not to invite as much performance gains as expected.**
A: First, we want to clarify that the proposed method is easy to implement. RCH and QRL can be implemented with fewer than 100 lines of code, while GCL is a simple modification of Focal Loss that changes the hard label into a soft GIoU target. HMC is a simplified version of an existing matching cost design, which reduces the conventional weighted sum of three costs to just one term.
Second, although we propose four components, it is a comprehensive design that introduces ranking into DETRs. It includes both architectural design and optimization modifications.
Third, the proposed components all facilitate better ranking. RCH and QRL embed ranking information in the network architecture. GCL and HMC make the ranking metric (classification score) IoU/GIoU aware and further improve the quality of ranking.
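As a rough sketch (our own simplification, not the paper's implementation), the GIoU-aware classification loss for a single positive query can be written as a focal-style BCE with the soft target $t = (\mathrm{GIoU}+1)/2$:

```python
import math

def giou_aware_cls_loss(p, giou, gamma=2.0):
    """Sketch of GCL for one positive query: the hard label 1 is replaced by
    the soft GIoU target, modulated by a focal-style factor |t - p|**gamma.
    Function and argument names are illustrative, not the paper's code."""
    t = (giou + 1.0) / 2.0                                   # normalized GIoU in [0, 1]
    bce = -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p)) # soft-target BCE
    return abs(t - p) ** gamma * bce                         # scaling factor on the positive
```

The loss vanishes when the predicted score matches the soft target, so well-aligned queries are left untouched while misaligned ones are corrected.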
---
> **The title is too broad and has little meaning. I recommend to change a name including the technical contribution.**
A: Good point! We appreciate your valuable suggestions on including the technical contribution in the title. The technical contribution of this work lies in introducing rank-oriented architecture designs and rank-oriented optimization designs (training loss and matching cost). Therefore, we propose a possible candidate: “Rank-DETR: Improving DETR with Rank-Oriented Architecture and Loss Designs”. We would appreciate any further valuable comments you may have.
---
> **The addition of the RCH module even hurts the performance, especially for AP_L, it degrades 0.6 points. Please explain this phenomenon.**
A: In our observation, large objects tend to have a high classification score in DETRs, while the small ones do not. The proposed RCH tends to enhance the classification score of small objects. As a result, the average precision (AP) for small objects is improved, while AP_L is affected negatively. Overall, RCH still increases the overall performance in terms of the mean average precision (mAP) metric.
---
> **The learnable logit bias vectors S and the randomly initialized content query C are confusing to me. Are they tricks or can the author give an explanation (or illustrations with figures) for their practical functions? I think further ablations on the two learnable bias are required.**
A: The function of rank-aware static content query C has been analyzed in Fig. 3(b) and (c). Fig. 3(b) shows that by embedding C in the sorted content queries O, the classification scores of positive queries (matched in the Hungarian matching process) are enhanced. Fig. 3(c) illustrates that the score of negative queries (unmatched ones) is suppressed. Therefore, QRL ensures lower false positives (FP) and false negatives (FN).
We further visualize the learnable logit bias vectors S for each decoder layer in the **attached PDF in the general response** (Fig. 6 and Fig. 7). For each decoder layer, the logit bias is a tensor with a shape of [num\_queries, num\_classes]. We use the index of queries (which has been sorted) as the x-axis and the processed value of logits (mean for Fig. 6, max for Fig. 7) for the y-axis. It can be observed that the learned logit bias further enhances the higher-ranked logits and suppresses the lower-ranked ones.
---
> **The result for DINO, R50, 12epochs in Table 1 is different from that in Table 3. Please check.**
A: Good point! The results in Table 1 are the numbers reported in the original paper. The results in Table 3 were reproduced on our servers for a fair comparison. Our reproduced performance of DINO-DETR is slightly worse than the numbers reported in the original paper. Following your valuable comment, we will explicitly state in the revision that the numbers in Table 3 were reproduced by us.
---
Rebuttal Comment 1.1:
Title: Thanks the authors for the rebuttal
Comment: I have read the rebuttal and other reviews. The rebuttal has addressed my concerns sufficiently. I would like to keep my original rating as "borderline accept". | Summary: In this paper, the authors study the problem of object detection. To be specific, they introduce rank-awareness into transformer-based detectors both at the architecture-level and the loss/cost-level.
After the rebuttal:
The authors have addressed my concerns about the comparison with other ranking-based solutions. Therefore, I recommend the paper to be accepted.
Strengths: 1. Ranking is an important problem in object detection.
2. The paper introduces and incorporates many interesting ways of integrating ranking into transformer-based detectors.
3. Strong results compared to existing methods.
Weaknesses: 1. The paper is missing citations and comparison to significant ranking-based object detectors. To name a few, AP Loss, aLRP Loss, Rank & Sort Loss, Correlation Loss.
2. "Rank-adaptive Classification Head" => What does it have to do with ranking? This module just adds a learnable bias to classification scores, which do not depend on negatives or positives or localization qualities.
3. Some aspects require clarifications:
3.1. Eq 2: How does updating the positional encodings based on the sorted boxes disrupt positional information of the boxes?
3.2. Eq 2 & 3: Do you not need Sort() to be differentiable to pass gradients through?
3.3. "High-order Matching Cost" => Why high-order? What is high and what is order in this context?
4. Improvement over Varifocal loss (0.3% mAP) is insignificant. It is not clear what was missing in existing ranking-based loss functions.
5. Presumably there was not sufficient time to compare against [1, 21]. If you have performed a comparison in the meantime, can you please share the results?
6. It is a pity that there are no results on the LVIS dataset.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer vAjC
---
> **Missing citations and comparison to significant ranking-based object detectors.**
A: Thanks for sharing so many valuable ranking-based object detectors, which will surely help us improve our work! We will include the missing citations and the following comparisons in the revision.
👉 First, we discuss the differences in their mathematical formulations as follows:
- AP Loss: $L_{AP} = \frac{1}{|P|} \sum_{i\in P} \sum_{j\in N}L_{ij}$, where the AP loss aims to address the extreme foreground background class imbalance issue.
- aLRP Loss: $L^{aLRP}=\frac{1}{|P|}\sum_{i\in P}\ell^{\mathrm{LRP}}(i)$, where the aLRP loss is the first ranking based loss function for both classification and localisation tasks.
- Rank and Sort Loss: $L_{RS} := \frac{1}{|P|} \sum_{i\in \mathcal{P}} (\ell_{\mathrm{RS}}(i) - \ell_{\mathrm{RS}}^*(i))$, where the Rank and Sort Loss aims to rank each positive above all negatives as well as to sort positives among themselves with respect to their localisation qualities.
- Correlation Loss: $L_{corr} =1 − \rho(IoU,s)$, where the Correlation Loss is a simple plug-in loss function to improve correlation of classification and localization tasks.
According to the above formulations, AP Loss, aLRP Loss, and Rank & Sort Loss mainly constrain the classification and localization predictions between pairs of samples (positive pairs or positive-negative pairs) using error-driven updates. In contrast, our loss function uses plain backpropagation to align each classification score with its own target, rather than constraining pairs of classification scores. In addition, Correlation Loss is a plug-in component used together with the classification loss, while our loss function replaces the original classification loss.
👉 Second, we also attempt to integrate these methods with the H-DETR method and report the initial comparison results as follows:
| method | mAP | AP50 | AP75|
| ---- | --- | ---- | ----- |
| H-DETR (reproduced baseline) | 48.2 | 66.4 | 52.9 |
| H-DETR + Correlation Loss | 48.9 (+0.7) | 65.7 | 53.3 |
| H-DETR + Ours | 49.2 (+1.0) | 66.9 | 53.7 |
Also, the other methods (AP, aLRP, RankSort) result in slight mAP degradations, and our method significantly outperforms these ranking losses when built on H-DETR. This stems from the difference in **matching strategy** between the detectors used in those papers and DETR. We would like to include the above discussions in the final revision and welcome any further suggestions.
---
> **Ranking in RCH.**
A: The input $\mathbf{o}^l_i$ for Equation $\mathbf{t}^l_i=\operatorname{MLP}(\mathbf{o}^l_i)$ is
computed based on the sorted content query $\overline{\mathcal{Q}}_c^{l-1}$ and sorted position query $\overline{\mathcal{Q}}_p^{l-1}$ from the previous decoder layer.
Therefore, each element of the learnable bias corresponds to a specific ranking position according to the descending order of the classification scores predicted by the previous decoder layer.
---
> **clarifications**
A:
👉 Eq 2: How does updating the positional encodings based on the sorted boxes disrupt positional information of the boxes?
We update the positional encodings based on the sorted boxes in order to **keep the order of the positional embedding aligned with the order of the ranked content query** as we need to send the combination of rank-aware positional query embedding and content query embedding into the subsequent Transformer decoder layers.
👉 Eq 2 & 3: Do you not need Sort() to be differentiable to pass gradients through?
The indices produced by Sort() do not need to be differentiable; only the gathering of values does. We implement the Sort() function in Eq 2 & 3 with a combination of two functions: torch.argsort() and torch.gather(). First, we use torch.argsort() to obtain the indices of the sorted probability predictions $\mathcal{P}^{l}$; torch.argsort() is not differentiable. Second, we use torch.gather() to gather the values of $\mathcal{B}^{l}$ and $\mathcal{O}^{l}$ according to the output of torch.argsort(); torch.gather() is differentiable and passes gradients through.
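The two-step implementation described above can be illustrated with a small self-contained snippet (the scores, query values, and shapes are made up for illustration; this is not the paper's code):

```python
import torch

# Toy classification scores P^l and content queries O^l (illustrative values).
scores = torch.tensor([0.2, 0.9, 0.5])
queries = torch.tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]], requires_grad=True)

# Step 1: torch.argsort() computes the ranking indices (not differentiable).
idx = torch.argsort(scores, descending=True)

# Step 2: torch.gather() reorders the queries by those indices; this step is
# differentiable, so gradients flow back to the unsorted queries.
sorted_queries = torch.gather(queries, 0, idx.unsqueeze(-1).expand(-1, queries.size(-1)))

sorted_queries.sum().backward()  # gradients reach `queries` through the gather
```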
👉 "High-order Matching Cost" => Why high-order? What is high and what is order in this context?
(i) The reason for using a high-order combination of the class scores and the IoU is:
- **non-linear relationships**: using high-order combinations allows us to capture non-linear relationships between the classification scores and the IoU scores, leading to better model performance and a more accurate representation of the underlying patterns in the data.
- **better representation of interaction effects**: high-order combinations can capture interaction effects between two variables more effectively than linear combinations. We can use different power terms for them to control the effect of each variable more flexibly.
(ii) In this work, we use high-order to refer to non-linear combinations of the classification scores and the IoU scores, i.e., $\hat{\mathbf{p}}[l] \cdot {\text{IoU}}^{\alpha}$, as shown in Equation (7).
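A minimal sketch of the high-order cost in Equation (7) follows (the function and argument names are ours, and scalar inputs are assumed for illustration):

```python
def high_order_matching_cost(p, iou, alpha=4.0):
    # Non-linear (high-order) combination of classification score and IoU:
    # raising IoU to the power alpha lets localization quality influence the
    # matching score more strongly than a linear combination would.
    return p * iou ** alpha
```

With `alpha = 4`, a well-localized but less confident prediction can outrank a confident but poorly localized one.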
---
> **Varifocal loss**
A: Varifocal loss **removes the scaling factor on positive examples**. According to the original paper, the reason for this design is that **positive examples are extremely rare compared with negatives, so their precious learning signals should be kept**. However, this assumption does not hold for DETR-based detectors, where **the ratio of positive examples is much larger** (e.g., 30-50 positive samples vs. 250-270 negative samples given 300 queries). Therefore, we empirically show that applying a scaling factor ($-|t-\hat{\mathbf{p}}[l]|^\gamma$) to positive samples helps improve the performance.
---
> **Comparison with Aligh-DETR, Stable-DINO**
A: Please refer to the general response.
---
> **For LVIS dataset**
A: We provide the results on LVIS based on Detic with Deformable-DETR + R50 (https://github.com/HDETR/H-Detic-LVIS) trained for 24 epochs:
| method | mAP |
| -| - |
| Detic | 30.9 |
| Rank Detic | 34.1 |
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the detailed rebuttal. I am happy with the responses and the new results. Therefore, I will keep my original recommendation as Accept.
---
Reply to Comment 1.1.1:
Title: Thanks for the Response of Reviewer vAjC
Comment: We thank the reviewer for your prompt response and for keeping the positive rating.
We intend to incorporate the rebuttal contents into the final revision in line with your invaluable suggestions. Your guidance has been instrumental in enhancing the quality of our work, and we truly appreciate your dedication to this process. | Summary: This paper proposes a DETR training method named Rank DETR that integrates multiple (four) rank-oriented designs, i.e., rank-adaptive classification head (RCH), query ranking layer (QRL), GIoU-aware classification loss (GCL), and high-order matching cost (HMC). Among these four components, the former two (RCH and QRL) are relatively novel, while the latter two (GCL and HMC) are close to already-existing IoU-based loss function and matching criterion (e.g., as in Stable DINO). Experimental results show that the proposed Rank DETR improves multiple DETR baselines. However, some recent methods (e.g., Stable DINO) with fewer components achieve comparable or even higher results.
Strengths: - Two (out of four) major components, i.e., rank-adaptive classification head (RCH), query ranking layer (QRL) are novel.
- Ablation experiments show that most components bring considerable improvement (the improvement from RCH is relatively trivial) and integrating them brings further improvement.
Weaknesses: - Two (out of four) major components, i.e., GIoU-aware classification loss (GCL), and high-order matching cost (HMC), share close insight, motivation and mechanism with recent methods, e.g., Stable DINO and Aligned DETR (though the detailed implementation is different). Moreover, the improvement brought by these two components is smaller than the similar ones in the competing methods.
- More importantly, though the proposed Rank DETR adds two more components (RCH and QRL) based on IoU-related loss and matching (GCL and HMC), the overall results of Rank DETR is still lower than the recent methods that only has IoU-related loss (and matching), i.e., Stable DINO. For example, with DINO baseline, the proposed Rank DETR achieves 49.6 AP (12 epochs), while Stable DINO achieves 50.4 AP (12 epochs).
- The definition of ranking in DETR is not clear enough. What criterion is the ranking based on? The IoU or the predicted confidence. This should be clearly pointed out at the very beginning.
- The relation between the proposed method and IoU-related methods is not clear enough. Section 2 should explain their differences against IoU-related methods, as well as connections, in more details.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Eqn. 2 is confusing. What does Sort(A, B) perform? Sorting B and duplicate the sorting results onto A?
- In Eqn. 2, positional embedding is irrelevant to the rank of a predicted results, but determined by the coordinates. Since the ranking (sorting) does not change the coordinates, how does the ranking impact the positional embedding? If there is no impact, why do you enforce ranking in Eqn. 2?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer JDo7
We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
> **"Two (out of four) major components ..."**
A:
👉 First, we summarize their mathematical formulations as follows:
| method | classification loss modification | matching cost modification |
| :------------- | :------------------: | :-------------: |
| Stable-DINO | change the classification target for positive query to: $\mathrm{IoU}^2$ | modulate the classification prediction with GIoU scores: $\mathrm{p}\times(\frac{\mathrm{GIoU}+1}{2})^{0.5}$ |
| Align-DETR | change the classification target for positive query to: $\mathrm{p}^{0.25}\times\mathrm{IoU}^{0.75}$ | no change |
| Rank DETR | change the classification target for positive query to: $\frac{\mathrm{GIoU}+1}{2}$ | replace the original one with $\mathrm{p} \cdot {\text{IoU}}^{4}$ |
According to the above formulations, we can see that the key difference is at the modification to the original matching cost, which consists of classification cost, $\ell_1$ loss, and GIoU loss.
👉 Second, we agree that the improvement brought by these two components seem smaller than the similar ones in the competing methods when using the DINO-DETR as the baseline. We attempt to explain the possible reasons in the general response.
---
> **"More importantly, though the ..."**
A: Great point!
👉 First, we want to clarify that **the mentioned 50.4 AP (12 epochs) of Stable-DINO relies on two additional techniques: (i) NMS and (ii) memory fusion**. These results are summarized in Table 6 of the Stable-DINO paper, and the two techniques bring an additional +0.4 AP. Therefore, **the actual improvement brought by the IoU-related loss (and matching) in Stable-DINO is a 1.0 AP gain**.
👉 Second, we have provided more comparison results in the general response. Align-DETR and Stable-DINO seem to use an improved DINO-DETR baseline with AP=49.4 (vs. ours: AP=48.7).
---
> **"The definition of ranking in DETR ..."**
A: Thanks for pointing out this issue and we address your concerns as follows:
👉 **Ranking definition**: ranking in DETR refers to **the ranking order of the object queries (and the associated bounding box predictions)**. The ranking order of object queries is important because the final predictions of modern DETR methods (H-DETR, DINO-DETR) are generated by sorting the bounding box predictions by classification confidence score in descending order by default.
👉 **Criterion of ranking**: the modern DETR methods (H-DETR, DINO-DETR) choose the default classification scores (agnostic to the localization precision) as the ranking criterion. In this work, we choose the **GIoU-aware classification scores** (predicted by the classification head) as the ranking criterion. These GIoU-aware classification scores of the positive samples are supervised by its normalized GIoU scores ($(\mathrm{GIoU}(\hat{\mathbf{b}}, \mathbf{b}) + 1)/2$) with the matched ground-truth box.
---
> **"The relation between the proposed ..."**
A: Thanks for your valuable suggestions! We would like to include the following discussion in the final revision following your comments. We address your concerns on the connections and differences between our method and the IoU-related methods as follows:
👉 Connections: the most significant connection exists between the IoU-related methods and our work, alongside Stable-DINO and Align-DETR. A pivotal similarity lies in our shared objective, which centers around **addressing the mis-alignment between classification scores and localization accuracy.**
👉 Differences:
- **Designed for different detectors**: the previous IoU-related methods focus on improving the ranking scheme for conventional object detectors such as FPN, Cascade R-CNN, FCOS, and ATSS. Our work focuses on improving the ranking design for modern DETR-based object detectors, considering that H-DETR/DINO-DETR already achieve stronger results.
- **DETR-oriented innovations**: our innovation lies in the domain of DETR-specific designs, specifically our pioneering rank-oriented architecture for DETR methods. These elements are ingeniously crafted to cater to the intricate task of investigating the ranking order of object queries within DETR-based object detectors.
---
> **"Eqn. 2 is confusing..."**
A: Thanks for pointing out this issue! Sort(A, B) means sorting the elements within A according to the decreasing order of the elements within B. In Eqn. 2, we use $\operatorname{Sort}({\mathcal{B}}^{l}, \mathcal{P}^{l})$ to represent sorting the bounding box predictions $\mathcal{B}^{l}$ according to the decreasing order of their corresponding classification scores $\mathcal{P}^{l}$.
We implement the Sort() function by combining two functions: torch.argsort() and torch.gather(). First, we use torch.argsort() to obtain the indices of the sorted probability predictions $\mathcal{P}^{l}$; note that torch.argsort() is not differentiable. Second, we use torch.gather() to gather the values of $\mathcal{B}^{l}$ according to the output of torch.argsort().
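A minimal pure-Python sketch of this two-step implementation (the actual code uses `torch.argsort()` and `torch.gather()` on batched tensors; the box and score values below are hypothetical):

```python
def sort_boxes_by_score(boxes, scores):
    """Sort(B, P): reorder boxes by decreasing classification score.

    Step 1 mirrors torch.argsort(scores, descending=True): get the
    permutation indices (this step is not differentiable).
    Step 2 mirrors torch.gather(boxes, ...): gather box values by index.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [boxes[i] for i in order]

boxes = [[0.1, 0.1, 0.4, 0.4], [0.2, 0.2, 0.9, 0.9], [0.0, 0.0, 0.3, 0.3]]
scores = [0.2, 0.9, 0.5]
print(sort_boxes_by_score(boxes, scores))  # highest-scoring box first
```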
---
> **"In Eqn. 2, positional embedding ..."**
A: (i) Yes, positional embedding is determined by the coordinates (bounding box predictions). (ii) The ranking of the positional embedding essentially places the bounding box predictions associated with higher GIoU-aware classification scores in the top-ranked positions. (iii) The reason for enforcing ranking in Eqn. 2 is to **keep the order of the positional embedding aligned with the order of the ranked content query** (shown in Eqn. 3) as we need to send the combination of rank-aware positional query embedding and content query embedding into the subsequent Transformer decoder layers.
---
Rebuttal Comment 1.1:
Title: Looking forward to hearing the feedback from JDo7
Comment: We sincerely value your dedicated guidance in helping us enhance our work. We are eager to ascertain whether our responses adequately address your primary concerns, particularly in relation to the comparisons with Stable DINO and Aligned DETR. We would be grateful for the opportunity to provide any needed further feedback.
---
Rebuttal Comment 1.2:
Title: Thanks for the authors' rebuttal
Comment: The authors have clarified most unclear statements and explained their differences and connections with recent IoU-based DETR methods. Though the proposed method shares similarities with some recent IoU-based DETR methods (e.g., Stable DINO and Align DETR), the reviewer is convinced that these works are actually concurrent, and thus has no more doubt on their contributions. Moreover, during rebuttal, the authors have shown that compared with Stable DINO, their method can actually achieve comparable results on the popular baseline DINO and even slightly higher accuracy on a more recent method (H-DETR). Therefore, I would like to change my rating to "weak accept". | Summary: This paper proposes Rank-DETR for image object detection. The key contributions of Rank-DETR include (1) a rank-oriented architecture design, which comprises a rank-adaptive classification head and query rank layer to ensure lower FP and FN in predictions; (2) a rank-oriented loss and matching design, which introduces GIoU-aware classification loss and high-order matching cost to boost the AP under high IoU thresholds. Experiments on COCO demonstrate each component can contribute to the overall performance and align with the design expectations, and Rank-DETR outperforms the current SOTA detector DINO.
Strengths: 1. The paper is well written, neat, clean, and easy to follow.
2. The motivation of the paper is clear: (1) the rank-oriented architecture design ensures lower FP and FN; (2) the rank-oriented matching cost and loss boost the AP under high IoU thresholds. Experiments demonstrate the proposed components align well with the design intent.
3. The experiments are very solid: the paper compares against two SOTA query-based detectors, H-DETR and DINO, and outperforms them. The ablation study thoroughly analyzes each component's effect and performance contribution to demonstrate its effectiveness.
Weaknesses: 1. Some module designs are not very novel. (1) The Rank-adaptive Classification Head learns a class-aware, input-independent logits vector to model the class distribution and calibrate each query's score predictions. Such a technique is widely used in other fields such as long-tailed classification/detection and few-shot classification/detection. (2) The GIoU-aware Classification Loss is very similar to Varifocal Loss, except that the modulating factor of the positive classification loss is $t - \hat{p}[l]$ instead of $t$, and such a minor modification leads to only a slight performance boost of ~0.3 mAP.
2. The Query Rank Layer design decouples the content and positional queries. The content query is constructed by an MLP mapping of the fusion of a static query $C_l$ and the output of the last decoder layer $O_l$. The positional query is constructed from the PE encoding of the ranked predicted boxes. This is a new design of query initialization, and it would be interesting to compare it with other query initialization methods in DETR, Deformable-DETR, and DINO, but I cannot find such comparisons.
3. There is no analysis or comparison of parameter FLOPs and computation cost (training & testing speed).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer m522
We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
---
> **"Some module designs are not very novel. (1) Rank-adaptive Classification Head learns a class-aware and input-independent logits vector to model the class distribution and calibrate each query's score predictions. Such technique is well used in other fields like long-tailed classification/detection, and few-shot classification/detection. (2) GIoU-aware Classification Loss is very similar to Veri-Focal-Loss, except the modulating factor of positive classification loss is $t-\hat{\mathbf{p}}[l]$ instead of $t$, and such a minor modification leads to a slight performance boost, $0.3$ mAP."**
A: Great point! We acknowledge the similarity of the mentioned module designs to previous techniques. However, our simple designs already showcase significant potential in exploring rank-oriented concepts for DETR-based object detectors. We hope that our efforts can inspire more advanced rank-oriented designs. Last, we welcome any further valuable suggestions to continue improving these module designs. Your insights are greatly appreciated!
---
> **"The Query Rank Layer design decouples content and positional query. The content query is constructed from MLP mapping of the fusing of a static query $C_l$ and output of the last decoder layer $O_l$. The positional query is constructed from PE encoding of ranked predicted boxes. This is a new design of query initialization, it will be interesting to compare it with other query initialization methods in DETR, Deformable-DETR, and DINO, but I can't find such comparisons."**
A: Great point! We follow your suggestion to compare the Query Rank Layer method with the query initialization methods of DETR, Deformable-DETR, and DINO as follows:
| Method | Content Query | Positional Query |
|-----------------|----------|-------|
| DETR | iterative refine | shared across layers, learnable queries |
| Deformable-DETR | iterative refine | shared across layers, introduced with bounding box |
| DINO | iterative refine | regenerated at each layer from bounding box |
| Ours | iterative refine + sorted embedding | regenerated at each layer from bounding box |
We also ablate the proposed QRL (query initialization method) in Deformable-DETR and DINO under the 1x schedule. DETR results are not provided because it needs long training schedules to converge.
| Method | Backbone | QRL | mAP |
|-----------------|----------|-------|-----|
| Deformable-DETR | R50 | ❎ | 43.4 |
| Deformable-DETR | R50 | ✅ | 45.0 |
| DINO | R50 | ❎ | 48.7 |
| DINO | R50 | ✅ | 49.3 |
According to the above comparison results, we can see the Query Rank Layer consistently outperforms other query initialization methods.
---
> **"There is no analysis or comparison of parameter FLOPs and computation cost (training & testing speed)."**
A: Great point! We follow your suggestion to provide the comparison of parameter FLOPs and computation cost (training & testing speed) as follows:
| Method | Backbone | Params(M) | FLOPs(G) | Training Cost (min) | Testing FPS (img/s) | mAP |
|-------------|----------|-----------|----------|---------------------|---------------------|------|
| H-DETR | R50 | 47.56 | 280.30 | 69.8 min | 19.2 | 48.7 |
| Rank-H-DETR | R50 | 49.10 | 280.60 | 71.8 min | 19.0 | 50.0 |
We conducted testing and training cost evaluations utilizing the RTX 3090 GPU. The outcomes reveal a noteworthy enhancement in detection performance through our proposed method, with a marginal increase in FLOPs and inference latency.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's feedback, which resolved all of my concerns. I will keep my initial rating of "borderline accept". | Rebuttal 1:
Rebuttal: ## To AC and All Reviewers
We thank all the reviewers for their careful reviews and constructive suggestions. This constructive feedback has significantly contributed to the improvement of our paper. We are glad to find that the initial ratings of three reviewers (Reviewer m522, Reviewer kDha, and Reviewer vAjC) are positive.
Above all, we attempt to address the major concern on the differences and comparison with Stable-DINO and Align-DETR, from the following aspects:
> **Stable-DINO and Align-DETR are con-current works**
We would like to highlight that Stable-DINO was made available on arXiv on **10th April 2023**, while Align-DETR became accessible on **15th April 2023**. It is important to note that both of these works were not formally published at the time of this submission on **11th May 2023**. We have duly mentioned this aspect in the related work section of our paper, where we discuss the relevance and relationship of our approach to these contemporaneous works.
> **Different motivation and insight**
The motivation of Stable-DINO is to **address the unstable matching problem across different decoder layers**, and the motivation of Align-DETR is to **address the misalignment between classification score and localization precision**. We acknowledge that the motivation of our rank-oriented loss and matching design is close to that of Align-DETR.
Different from both of them, the motivation of our rank-oriented architecture design is to **prompt positive predictions and suppress the negative ones to ensure lower false positive rates**.
> **Stable-DINO and Align-DETR introduced additional technical improvements**
(1) Stable-DINO further improves DINO-DETR with other designs. According to Table 6 in the Stable-DINO paper, **first, applying NMS during evaluation brings +0.2 gains (49.0->49.2); second, the combination of dense memory fusion and NMS brings +0.4 gains (49.0->49.4); third, the combination of position-supervised loss and position-modulated matching cost brings another +1.0 gain (49.4->50.4)**. According to Figure 7 in the Stable-DINO paper, we notice that the memory fusion method, which concatenates the 24x (4 scales x 6 encoder layers) multi-scale encoder feature maps with the 4x multi-scale backbone feature maps followed by linear projection and normalization, will bring additional computation overhead during both training and evaluation, e.g., during evaluation, the GFLOPs increase from 289.90 G to 300.12 G.
(2) Align-DETR further improves DINO-DETR with **a mixed matching strategy** (introduce more positive samples) and **a prime sample weighting scheme** (down-weight the loss for low-quality positive samples). According to Table 5 and Table 8 in Align-DETR paper, these two techniques bring 0.2 gains (50->50.2).
Different from both of them, we have proposed to improve the DETR-based methods from a novel aspect, i.e., rank-oriented architecture design, which brings +0.9 gains over the baseline. We further clarify a possible misunderstanding of Reviewer kDha (**the main boost originates from the GIoU-aware loss**). We summarize some key results (from Table 4 in the main paper) in the following Table for reference. Accordingly, we can see that the proposed rank-oriented architecture design boosts the baseline from 48.2 to 49.1 while rank-oriented matching and loss design boosts the baseline from 48.2 to 49.5, respectively.
| Rank-oriented architecture design (RCH + QRL) | Rank-oriented loss and matching design (GCL + HMC) | mAP | AP50 | AP75 |
| --- | --- | ---- |---- | ---- |
| ❎ | ❎ | 48.2 | 66.4 | 52.6 |
| ✅ | ❎ | 49.1 | 67.2 | 53.5 |
| ❎ | ✅ | 49.5 | 67.3 | 54.0 |
| ✅ | ✅ | 50.0 | 67.5 | 54.7 |
> **Concern about the smaller improvements than Stable-DINO and Align-DETR**
- First, we would like to highlight that Stable-DINO and Align-DETR **have tuned the hyperparameters and conducted all ablation experiments based on DINO-DETR**. For the experiments based on DINO-DETR, our goal is to **verify the generalization ability** and we simply **use the same hyperparameters as the experiments based on H-DETR without any tuning**. An interesting observation is that **our Rank-DETR (AP=50.0) significantly outperforms Stable-H-DETR (AP=49.2) when using the H-DETR as baseline**, where we notice that the H-DETR baseline of Stable-H-DETR is even stronger than our baseline: 48.6 vs. 48.2.
| method | mAP | AP50 | AP75 |
| ------------- | ---- |---- | ---- |
| H-DETR (Stable-DINO reproduce) | 48.6 | - | - |
| Stable-H-DETR | 49.2 | - | - |
| H-DETR (our reproduce) | 48.2 | 66.4 | 52.9 |
| Rank H-DETR | 50.0 | 67.5 | 54.7 |
- Second, to address the concerns on results over DINO-DETR, we report the detailed comparison results to Stable-DINO by using the additional special tricks as follows:
| method | NMS & Memory fusion | mAP | AP50 | AP75 |
| ------------- | --- | ---- |---- | ---- |
| DINO-DETR (our reproduced baseline) | ❎ | 48.7 | 66.1 | 52.9 |
| Rank DINO-DETR | ❎ | 49.6 | 67.0 | 54.7 |
| Rank DINO-DETR | ✅ | 50.4 | 67.9 | 55.3 |
Besides, we also attempt to leverage these techniques to improve our Rank H-DETR and we further achieve even stronger results than Stable-DINO, i.e., AP=$50.8$.
- Last, we also would like to point out another possible reason, Align-DETR (https://github.com/FelixCaae/AlignDETR) chooses the DINO baseline reproduced based on the detrex codebase (https://github.com/IDEA-Research/detrex/tree/main/projects/dino). According to their README, we find that the reproduced DINO-DETR baseline already achieves 49.4 in detrex implementation. We would like to reimplement our method on the detrex codebase if it is necessary.
👉 In conclusion, we earnestly hope that our work won't be rejected solely based on lacking direct competitive comparisons with concurrent endeavors. We firmly believe that our contribution can **offer new insights to the community on how to better exploit the ranking information for DETR-based object detectors**.
Pdf: /pdf/95a36a4bd8858b8751e78aab56db72849f1e1190.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Orthogonal Non-negative Tensor Factorization based Multi-view Clustering | Accept (poster) | Summary: Existing NMF-based multi-view clustering methods perform NMF on each view respectively and ignore the between-view relationships. To solve these problems, this paper proposes an orthogonal non-negative tensor factorization method with a one-side orthogonal constraint. This method can process the multi-view data directly and can also take full advantage of the original spatial structure of the multi-view data. Extensive experiments on various benchmark datasets indicate that the proposed method can obtain satisfactory clustering performance.
Strengths: (1) The proposed method can directly consider the between-view relationship and perform Orth-NTF on the 3rd-order tensor which is composed of anchor graphs of views.
(2) The construction of anchor graph reduces the complexity of the proposed algorithm, while the tensor Schatten p-norm regularization explores the cluster structure of multi-view data.
Weaknesses: (1) The Introduction and Related work mainly introduce the NMF and its advantages and disadvantages. However, the authors did not adequately explain the motivation for designing the tensor factorization and orthogonal constraint.
(2) In section 5.2, the author only briefly introduces that their method has achieved good results without any in-depth analysis, which cannot provide any valuable insights for the readers.
(3) The authors should add more ablation experiments to clearly point out which basic design (or technical idea) of the whole method makes the largest contribution.
(4) It is not clear how to determine the optimal parameter \lambda in Eq. (6). The authors also failed to analyze in detail the impact of different parameter combinations on the performance of the method, which leads to unconvincing results.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (1) The authors mainly introduce the NMF and its variants in Introduction and Related work. However, existing published work on multi-view clustering based on anchor graphs has not been discussed in detail.
(2) The authors should design more reasonable and complete experiments to prove the effectiveness of the proposed method. In addition, the quality of figures and tables should be improved.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors need to provide more abundant experimental results to further prove the effectiveness of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: The Introduction and Related work mainly introduce the NMF and its advantages and disadvantages. However, the authors did not adequately explain the motivation for designing the tensor factorization and orthogonal constraint.
__A1__: Thank you for highlighting that. Non-negative Matrix Factorization (NMF) is tailored primarily for second-order matrices. When processing third-order tensors, there's a need to first transform the tensor into a matrix before applying NMF. This step can lead to a loss of inherent spatial structural information from the third-order tensor. In contrast, Non-negative Tensor Factorization (NTF) sidesteps this issue. NTF directly decomposes third-order tensors, effectively capturing the spatial information they contain.
In the realm of multi-view clustering, conventional NMF-based methods apply NMF to each view independently. Subsequently, they combine the low-dimensional representations from different perspectives to arrive at a unified shared representation. This approach often overlooks the interrelationships between the views, which are crucial for clustering. Our model, however, directly implements NTF on the third-order tensor made up of anchor graphs from the various views. This ensures that the NTF not only acknowledges the relationships between the views but also harnesses the complementary information they offer. __Figure 1 in the provided PDF__ delineates the distinction between traditional NMF-based clustering techniques and our NTF-based approach. Furthermore, by incorporating an orthogonal constraint, our model offers distinct physical interpretability for clustering. This suggests that each row of the indicator matrix contains a single non-zero element, and the position of this element directly corresponds to the label of the respective sample.
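For reference, the kind of objective described above can be written schematically as follows (our simplified sketch, not the paper's exact Eq. (6); here $\mathcal{S}$ is the 3rd-order tensor of stacked anchor graphs, $\mathcal{G}$ the non-negative indicator tensor, $\mathcal{Q}$ the other factor, and $*$ the tensor product used for the factorization):

```latex
\min_{\mathcal{G} \ge 0,\, \mathcal{Q}}
  \left\| \mathcal{S} - \mathcal{G} * \mathcal{Q} \right\|_F^2
  + \lambda \left\| \mathcal{G} \right\|_{S_p}^{p}
\quad \text{s.t.} \quad \mathcal{G}^{\mathsf{T}} * \mathcal{G} = \mathcal{I}.
```

The one-side orthogonal constraint $\mathcal{G}^{\mathsf{T}} * \mathcal{G} = \mathcal{I}$, combined with non-negativity, is what forces each row of the indicator matrices to have a single non-zero entry, giving the direct label readout described above.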
__Q2__: In section 5.2, the author only briefly introduces that their method has achieved good results without any in-depth analysis.
__A2__: Thank you for pointing that out. Based on the experimental results presented in our paper, our method significantly outperforms other clustering approaches. This advantage may stem from the fact that our model directly factorizes the tensorized anchor graph—comprised of anchor graphs from various views—into the product of two non-negative tensors, one being an index tensor. As a result, our model effectively captures both the spatial structural information and the complementary data present in the anchor graphs from different perspectives. Additionally, with orthogonal and non-negative constraints in place, our model offers clear interpretability for clustering. This means that each row of the indicator matrix for every view contains a single non-zero element, with its position indicating the label of the associated sample. Consequently, our model can immediately provide the label without necessitating any post-processing, a step which other methods still require.
__Q3__: The authors should add more ablation experiments to clearly point out which basic design (or technical idea) of the whole method makes the largest contribution.
__A3__: Thanks very much for this constructive advice. In the experiments, we added some ablation experiments on orthogonal constraint and Schatten p-norm on four datasets. __Please see Table 1 in provided PDF__.
It can be found that tensor Schatten p-norm regularization is overall superior to the orthogonal constraint. The reason is that tensor Schatten p-norm regularization effectively characterizes both the complementary information and the spatial structure information of the index matrices of different views. Compared with using the orthogonal or tensor Schatten p-norm constraint alone, the joint constraint contributes most to clustering.
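For clarity, a common form of the tensor Schatten $p$-norm in this line of work (a sketch of the standard t-SVD-based definition; the exact notation in the paper may differ) is, for a 3rd-order tensor $\mathcal{G} \in \mathbb{R}^{n \times k \times V}$:

```latex
\left\| \mathcal{G} \right\|_{S_p}
= \left( \sum_{v=1}^{V} \sum_{i=1}^{\min(n,k)}
    \sigma_i\!\left( \overline{\mathcal{G}}^{(v)} \right)^{p} \right)^{1/p},
\qquad 0 < p \le 1,
```

where $\overline{\mathcal{G}}^{(v)}$ is the $v$-th frontal slice of $\mathcal{G}$ after the DFT along the third dimension and $\sigma_i(\cdot)$ denotes the $i$-th singular value. With $p < 1$ this gives a tighter rank surrogate than the nuclear norm ($p = 1$), which is why it better captures the low-rank cluster structure across views.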
__Q4__: It is not clear how to determine the optimal parameter \lambda in Eq. (6). The authors also failed to analyze in detail the impact of different parameter combinations on the performance of the method.
__A4__: Thank you for bringing that to our attention. To determine the value of $\lambda$, we initially approximate its range using the magnitude of the tensor Schatten p-norm regularization, followed by a more detailed fine-tuning within that range. The impact of varying parameter combinations on the method's performance can be seen __in Figure 3 of the provided PDF__. This figure highlights the clustering performance across different pairings of $p$ and $\lambda$.
__Q5__: The authors introduce the NMF and its variants in Introduction and Related work. However, existing published work on multi-view clustering based on anchor graphs has not been discussed in detail.
__A5__: Thanks very much. The main contribution of our paper is to propose a non-negative tensor factorization (NTF) model with an orthogonal constraint, and then apply it to the tensorized graph for large-scale multi-view clustering. Considering the length limitation of the paper, we did not discuss the anchor-based clustering methods in detail. It is true that there are many anchor-based multi-view clustering methods, but all of them process the anchor graph of each view separately and then fuse them to obtain a common shared anchor graph. In contrast, our model applies NTF directly to the third-order tensor formed by anchor graphs from different views. As a result, it not only leverages the relationships between the views but also taps into the complementary information they offer. Following your constructive advice, we will add a detailed description of anchor-based clustering methods in the revised paper.
__Q6__: The authors should design more experiments. In addition, the quality of figures and tables should be improved.
__A6__: Thank you for your advice. We have added ablation experiments as well as experiments on the impact of different parameter combinations on clustering performance (__See Table 1 and Figure 3 in the provided PDF__). We have adjusted the figures and tables to improve their readability and presentation.
---
Rebuttal Comment 1.1:
Title: Replying to Rebuttal by Authors
Comment: Thank you for your detailed rebuttal. Your rebuttal has indeed helped me better understand the proposed approach. Overall, I think that the proposed approach has its distinctions but is still not so solid from my viewpoint. The advantages of tensor factorization for multi-view data mining are quite mature, and the orthogonal property is also well discussed in previous tensor-based solutions. With the justification of the intuition given now, I'm raising my rating from "Borderline Reject" to "Borderline Accept".
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for the positive feedback and recognition of our work. | Summary: This paper presented an orthogonal semi-nonnegative tensor factorization and proposed a novel tensorized anchor graph factorization model for Multiview clustering. Compared with existing NMF-based multi-view clustering methods, the proposed model has the following advantages: First, the proposed model directly factorizes 3-order tensorized anchor graph for clustering, while existing methods perform NMF on each view respectively. Thus the proposed method well exploits the within-view and between-views spatial structure information. Second, the proposed model has good interpretability for clustering with orthogonal non-negative constraint on tensorized soft labels of views. Third, authors use the tensor Schatten p-norm regularization as a rank approximation of the 3rd-order tensor which characterizes the cluster structure of multi-view data and exploits the between-view complementary information. Fourth, the paper presented a convergence analysis in theory. Experimental results indicate the efficiency of the proposed model on some databases.
Strengths: (1) The paper presented an orthogonal semi-nonnegative tensor factorization.
(2) The paper proposed a novel tensorized anchor graph factorization model for Multiview clustering with good interpretability for clustering by orthogonal non-negative constraint on tensorized soft labels of views, this avoids post-processing for clustering.
(3) The proposed model directly factorized 3-order tensorized anchor graph for clustering, while existing methods perform NMF does not.
(4) The paper mathematically proved the convergence of the proposed algorithm for clustering.
Weaknesses: (1) The paper does not provide the storage complexity and computational complexity.
(2) It is unclear how to select anchor points or construct anchor graph.
(3) It is unclear for the variables in (19).
(4) In Algorithm 1, $\mu$ and $\rho$ will go to infinity. This will make the algorithm unstable.
(5) In the experiments, how are $\eta$ and $\lambda$ selected?
(6) Figure 6 indicates that the proposed model has good convergence, how does the performance (such as ACC) of the proposed model change?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) In the paper, how to select anchor points or construct anchor graph?
(2) In Algorithm 1, $\mu$ and $\rho$ will go to infinity. This will make the algorithm unstable.
(3) What is the storage complexity and computational complexity for the model?
(4) It is unclear for the variables in (19).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors adequately addressed the limitations of the existing NMF-based multi-view clustering methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: The paper does not provide the storage complexity and computational complexity.
__A1__: Thank you for the clarification. For Orth-NTF, the storage requirements for $\mathcal{G}$, $\mathcal{Q}$, $\mathcal{H}$, $\mathcal{J}$, $\mathcal{Y}_1$, and $\mathcal{Y}_2$ have complexities of $\mathcal{O}(V(m+k)n)$, $\mathcal{O}(V(n+k)k)$, $\mathcal{O}(Vnk)$, $\mathcal{O}(Vnk)$, $\mathcal{O}(Vnk)$, and $\mathcal{O}(V(n+k)k)$, respectively. Combining these, the total storage complexity for Orth-NTF is $\mathcal{O}(Vnm+6Vnk+2vk^2)$.
The process of constructing $\mathcal{S}$ has a computational complexity of $\mathcal{O}(Vnmd+Vnm\log(m))$. When updating the four variables—G, Q, H, and J—their respective computational complexities are $\mathcal{O}(Vnkm+Vnk\log(k))$, $\mathcal{O}(Vm^2 k+Vmk^2)$, $\mathcal{O}(Vnk)$, and $\mathcal{O}(2Vnk\log(Vk)+V^2 kn)$. Given that $m$, $n$, $k$, and $V$ are relatively small constants, the primary computational cost associated with updating the variables stands at $\mathcal{O}(Vnkm+Vm^2 k)$. Summing it all up, the overall computational complexity of our proposed method is $\mathcal{O}(Vnmd+Vm^2 k)$.
__Q2__: It is unclear how to select anchor points or construct anchor graph.
__A2__: Thanks very much. We adopt directly alternate sampling (DAS) to select anchors inspired by [1], and we construct anchor graph in the same way as [1]. We also explain about anchor selection and anchor graph construction in our Supplementary Material.
[1] Li, X., Zhang, H., Wang, R., and Nie, F. Multiview clustering: A scalable and parameter-free bipartite graph fusion method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1):330–344, 2022.
__Q3__: It is unclear for the variables in (19).
__A3__: Thanks very much. Sorry for the confusion. $\overline{\Lambda}^{(v)}$ and $\overline{V}^{(v)}$ can be obtained by $\mathrm{SVD}(\overline{\mathcal{B}}^{(v)}) = \overline{\Lambda}^{(v)} \mathbf{X} {\overline{V}^{(v)}}^{\mathsf{T}}$, where $\mathbf{X}$ is the diagonal matrix of singular values and $\overline{\mathcal{B}}^{(v)} = 2\overline{\mathcal{S}}^{(v)} \overline{\mathcal{G}}^{(v)} + \mu \overline{\mathcal{H}}^{(v)} + \overline{\mathcal{Y}}_1^{(v)}$.
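A small NumPy sketch of this type of SVD-based update (this is the standard orthogonal-Procrustes step; the variable names and the random input are illustrative, not the paper's exact update):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))  # stand-in for a slice like B^(v)

# SVD(B) = Lambda @ diag(X) @ V.T; the factor with orthonormal columns
# that best approximates B in Frobenius norm is Lambda @ V.T.
Lam, X, Vt = np.linalg.svd(B, full_matrices=False)
G = Lam @ Vt

# G satisfies the one-side orthogonality constraint: G^T G = I
print(np.allclose(G.T @ G, np.eye(3)))  # -> True
```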
__Q4__: In algorithm 1, mu and \pho will be infinity. This will make the algorithm be unstable.
__A4__: Thanks very much. In the experiments, we set the maximum of $\mu$ and $\rho$ to $10^{13}$. We have explicitly pointed this out in our revised paper.
__Q5__: In the experiments, how to select \ eta and \ lambda?
__A5__: Thank you for noting that. The value of $\eta$ influences the algorithm's convergence speed, and based on our empirical observations, we set $\eta$ = 1.6. As for $\lambda$, we initially estimate its range by considering the magnitude of the tensor Schatten $p$-norm regularization and subsequently fine-tune within that established range.
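Concretely, the capped penalty-parameter schedule described in A4 and A5 can be sketched as follows (a hypothetical sketch of one common ADMM convention; $\eta = 1.6$ and the $10^{13}$ cap are taken from the replies above, the starting value is illustrative):

```python
def update_penalty(mu, eta=1.6, mu_max=1e13):
    """One ADMM-style penalty update: grow mu geometrically, capped at mu_max."""
    return min(eta * mu, mu_max)

mu = 1e-3
for _ in range(100):
    mu = update_penalty(mu)
# The cap keeps mu bounded, so the iterates cannot diverge to infinity.
print(mu)
```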
__Q6__: Figure 6 indicates that the proposed model has good convergence, how does the performance (such as ACC) of the proposed model change?
__A6__: Thanks very much.
When the number of iterations increases, the clustering metric (such as ACC) gradually improves overall and tends to a constant as the algorithm converges (__See Figure 2 in provided PDF__). This also indicates that our method has good clustering performance.
---
Rebuttal Comment 1.1:
Title: Review Results
Comment: I have carefully read the rebuttal and I think all my concerns have been well addressed. So I am willing to keep the final rating as ACCEPT.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for the positive feedback and recognition of our work. | Summary: This article proposed a novel orthogonal non-negative tensor factorization strategy for multi-view clustering, which well takes into account within-view spatial structure and between-view complementary information. Meanwhile, the optimization step has good convergency.
Strengths: [a] The paper is well-written and easy to follow.
[b] The proposed model is concise and has good interpretability.
[c] The experimental results are substantial.
Weaknesses: 1. Some formulas are not rigorously written; the variable to be solved for should be clearly indicated.
2. What does each letter of the matrix size represent?
3. In Algorithm 1, how to calculate the clustering label is not clear.
4. Authors are advised to report specific hyper-parameters on each dataset.
5. In Section 5.3, the author draws a conclusion that the clustering time increases linearly with the increase of anchor rate. This description seems imprecise.
6. Some typos, for example:
In Section 5.3, 'the Schatten p norm' should be 'the Schatten $p$-norm.'
In Section 5.2, 'according to Table 2 and 3' should be 'according to Tables 2 and 3.'
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Some formulas are not rigorously written; the variable to be solved for should be clearly indicated.
2. What does each letter of the matrix size represent?
3. In Algorithm 1, how to calculate the clustering label is not clear.
4. Authors are advised to report specific hyper-parameters on each dataset.
5. In Section 5.3, the author draws a conclusion that the clustering time increases linearly with the increase of anchor rate. This description seems imprecise.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The source code and datasets are encouraged to release.
2. It is encouraged that the author can discuss the practical application scenarios of multi-view clustering technology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: Some formulas are not strictly written; the variable to be solved for should be clearly indicated.
__A1__: Thanks very much. We double-checked our manuscript and corrected the formulas to explicitly indicate which variables need to be solved.
__Q2__: What does each letter of the matrix size represent?
__A2__: Thanks very much. In our article, $n$, $m$ and $k$ represent the number of samples, the number of anchors and the number of clusters, respectively. We have explicitly pointed out the meaning of each letter of the matrix size in the revised paper.
__Q3__: In Algorithm 1, how to calculate the clustering label is not clear.
__A3__: Thanks very much. The position of the largest element in each row of the indicator matrix gives the label of the corresponding sample. We have explicitly pointed this out in our revised paper.
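This label-extraction step can be sketched as follows (the indicator matrix values below are purely illustrative):

```python
import numpy as np

# Hypothetical indicator matrix for n=4 samples and k=3 clusters.
H = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8,  0.1 ],
    [0.2, 0.1,  0.7 ],
    [0.6, 0.3,  0.1 ],
])

# The cluster label of each sample is the column index of the largest
# entry in the corresponding row of the indicator matrix.
labels = H.argmax(axis=1)
print(labels)  # -> [0 1 2 0]
```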
__Q4__: Authors are advised to report specific hyper-parameters on each dataset.
__A4__: Thanks very much. The corresponding hyper-parameters for each dataset are as follows: __MSRC__: anchor rate=0.7, $p$=0.5, $\lambda$=100; __HandWritten4__: anchor rate=1.0, $p$=0.1, $\lambda$=1180; __Mnist4__: anchor rate=0.6, $p$=0.1, $\lambda$=5000; __AWA__: anchor rate=1.0, $p$=0.5, $\lambda$=1000; __Reuters__: anchor rate=0.005 (anchor number=100), $p$=0.4, $\lambda$=1209800; __NoisyMNIST__: anchor rate=0.03, $p$=0.1, $\lambda$=200000. We have explicitly pointed this out in our revised paper.
__Q5__: In Section 5.3, the author draws a conclusion that the clustering time increases linearly with the increase of anchor rate. This description seems imprecise.
__A5__: Thanks very much. We have corrected this imprecise description: the time required for clustering is approximately linear in the anchor rate.
__Q6__: Some typos, for example: In Section 5.3, 'the Schatten p norm' should be 'the Schatten $p$-norm.' In Section 5.2, 'according to Table 2 and 3' should be 'according to Tables 2 and 3.'
__A6__: Thanks very much. We double checked the manuscript and corrected them.
__Q7__: The source code and datasets are encouraged to release.
__A7__: Thanks. We're sorry we can't use any links in our reply. __We've sent an anonymized link to the AC as required__. The datasets we used are open source.
__Q8__: It is encouraged that the author can discuss the practical application scenarios of multi-view clustering technology.
__A8__: Thank you for pointing that out. Multi-view clustering techniques have been applied in a myriad of practical situations across diverse fields. To highlight a few applications:
1. In social media analysis, multi-view clustering allows for grouping based on textual content, visual features, and user network structures, thereby aiding in community identification or anomaly detection.
2. Recommender systems leverage this clustering technique across various user and item views, which can enhance the personalization, accuracy, and diversity of their recommendations. | Summary: In this paper, the authors focus on the problem of multi-view clustering using semi-non-negative tensor factorization (Orth-NTF) with a one-side orthogonal constraint. The proposed model extends Non-negative Matrix Factorization (NMF) to Orth-NTF, allowing for the utilization of spatial structure information from multi-view data to enhance clustering performance. Moreover, the authors fully leverage the complementary information embedded in different views by incorporating a tensor Schatten p-norm composed of cluster indicator matrices. To reduce computational complexity, anchor graphs are adopted instead of the original multi-view data. The authors provide an optimization algorithm for the proposed method and demonstrate its effectiveness through extensive experiments conducted on various datasets.
Strengths: 1.The overall structure of the paper is clear and comprehensive.
2.The research problem and innovative aspects are well-defined, and they are supported by sufficient experimental evidence.
3.The proposed method demonstrates promising results for the task of multi-view clustering.
4.The provided optimization method includes detailed formula derivation, enhancing the understanding of the proposed approach.
Weaknesses: 1.The paper seems to lack a detailed explanation of the advantages of extending NMF to 3rd-order tensor NMF.
2.The proposed method applies NMF to the anchor graph S. Couldn't other dimensionality reduction methods be used to learn low-dimensional representations of high-dimensional data and achieve acceleration in clustering? Why was the choice made to utilize the anchor graph?
3.I think the author should provide the code to make the experimental results more convincing.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: The paper seems to lack a detailed explanation of the advantages of extending NMF to 3rd-order tensor NMF.
__A1__: Thank you for your attention. Non-negative Matrix Factorization (NMF) is designed primarily for second-order matrices. When handling third-order tensors, one must first transform the tensor into a matrix to apply NMF. This transformation leads to a loss of spatial structural information inherent in the third-order tensor. Conversely, Non-negative Tensor Factorization (NTF) is adept at directly decomposing third-order tensors, effectively preserving the spatial information they contain.
For multi-view clustering, traditional NMF-based methods execute NMF separately on each view. They then amalgamate the low-dimensional representations from different perspectives to derive a common shared representation. However, these methods tend to overlook the interrelationships between the views, which play a crucial role in clustering. Our model, in contrast, applies NTF directly to the third-order tensor formed by anchor graphs from different views. As a result, it not only leverages the relationships between the views but also taps into the complementary information they offer.
To visually understand the distinction between prevailing NMF-based clustering techniques and our NTF-based model, __please refer to Figure 1 in the provided PDF__.
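The shape-level difference between the two approaches can be sketched as follows (a hypothetical numpy illustration with made-up sizes, not the paper's implementation):

```python
import numpy as np

# Hypothetical anchor-graph tensor: n samples x m anchors x V views.
n, m, V = 5, 3, 2
S = np.arange(n * m * V, dtype=float).reshape(n, m, V)

# Matrix NMF requires a 2-D input, so the tensor must first be unfolded,
# e.g. into an (n, m*V) matrix; the explicit view axis, and with it the
# spatial structure across views, is flattened away.
S_unfolded = S.reshape(n, m * V)

# NTF instead factorizes S directly, so the view dimension (and the
# relationships between views) is preserved.
assert S.shape == (n, m, V)
assert S_unfolded.shape == (n, m * V)
```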
__Q2__: The proposed method applies NMF to the anchor graph S. Couldn't other dimensionality reduction methods be used to learn low-dimensional representations of high-dimensional data and achieve acceleration in clustering? Why was the choice made to utilize the anchor graph?
__A2__: Thank you for highlighting that. While other dimensionality reduction methods like PCA and LPP can indeed be utilized to derive low-dimensional representations and handle large-scale data effectively, NMF offers certain unique advantages. Compared to PCA and LPP, NMF boasts superior interpretability. When augmented with an orthogonal constraint, NMF gains even clearer physical interpretability in the realm of clustering: each row of the indicator matrix possesses just one non-zero element, with the position of that element directly signifying the label of the corresponding sample. Additionally, the anchor graph excels in clustering due to its ability to adeptly encapsulate relationships between data points of any shape. Inspired by these strengths, we opted for anchor graphs in our approach to large-scale multi-view clustering.
__Q3__: I think the author should provide the code to make the experimental results more convincing.
__A3__: Thanks very much. We're sorry we can't use any links in our reply. __We've sent an anonymized link to the codes to the AC as required__.
---
Rebuttal Comment 1.1:
Comment: The author's response completely addressed my concerns, and I believe this article should be accepted.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for the positive feedback and recognition of our work. | Rebuttal 1:
Rebuttal: Supplementary PDF is uploaded here as required.
Pdf: /pdf/ab56da7caf5a4b78b87cf91a04e67d04885ddca2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DYffusion: A Dynamics-informed Diffusion Model for Spatiotemporal Forecasting | Accept (poster) | Summary: This paper proposes DYffusion, an approach that mimics diffusion models for spatiotemporal forecasting.
It treats the noising process as interpolation (parameterized by $I_\phi$) and the denoising process as forecasting (parameterized by $F_\theta$), i.e., it reimagines the noising step $T$ in the original diffusion models as the temporal step in forecasting.
DYffusion parameterizes the probabilistic transitions in forecasting continuous-time trajectories, which is similar to parameterizing the diffusion of SDEs.
It performs competitively on spatiotemporal forecasting tasks, including sea surface temperatures, Navier-Stokes flows, and spring mesh systems, in terms of probabilistic skill score metrics.
Strengths: 1. DYffusion reimagines the continuous-time probabilistic forecasting problem as a diffusion process. It benefits from existing diffusion algorithms and methods to accelerate the inference sampling.
2. DYffusion achieves the best or competitive scores while significantly reducing the time cost compared to diffusion approaches.
Weaknesses: 1. Eq. (6) is incorrect. There should be a differential instead of a derivative in the integral.
2. DYffusion's method is closely related to neural ODE and SDE. However, empirical studies lack corresponding baselines. For example, one possible baseline could involve extrapolating $dF_\theta/ds$ using an ODE solver.
3. It is unfair to use $F_\theta$ trained with Algorithm 1 in the baseline method Dropout. The forecaster used in Dropout should be trained with an objective such as $||F_\theta(x_{t+i}, i)-x_{t+h}||^2$ instead.
4. One of the main contributions mentioned is that DYffusion reduces complexity. However, a complexity analysis is missing.
5. Neural SDE [1,2,3,4] parameterizes the stochastic dynamics for modeling continuous-time processes and is therefore inherently suitable for probabilistic continuous-time forecasting. However, there is no discussion on a range of related works on neural SDE.
[1] Deng, Ruizhi, et al. "Modeling continuous stochastic processes with dynamic normalizing flows." Advances in Neural Information Processing Systems 33 (2020): 7805-7815.
[2] Liu, Xuanqing, et al. "Neural sde: Stabilizing neural ode networks with stochastic noise." arXiv preprint arXiv:1906.02355 (2019).
[3] Jia, Junteng, and Austin R. Benson. "Neural jump stochastic differential equations." Advances in Neural Information Processing Systems 32 (2019).
[4] Kidger, Patrick, et al. "Efficient and accurate gradients for neural SDEs." Advances in Neural Information Processing Systems 34 (2021): 18747-18761.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Why does Dropout perform so poorly at $t=2$ in Figure 3? Does this observation indicate the mentioned Weakness 3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper includes a separate section for discussing limitations. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments regarding the novelty, connection to existing diffusion models, and efficiency of our approach, as well as your valuable feedback to which we respond below.
**Q1:**
> Eq. (6) is incorrect. There should be a differential instead of a derivative in the integral.
**A1:**
Yes, there should be a $ds$ at the end of the equation. We apologize for the confusion, and have fixed this for the revised version.
**Q2:**
> DYffusion's method is closely related to neural ODE and SDE. However, empirical studies lack corresponding baselines. For example, one possible baseline could involve extrapolating dF_\theta/ds using an ODE solver
**A2:**
Thank you for making the connection of DYffusion to neural ODE/SDE methods.
However, **DYffusion is not directly related to neural SDEs**. They are different types of deep generative models (DGM). DYffusion is a diffusion model-based DGM that leverages the theory of SDE to learn high-dimensional distributions via score matching. In contrast, neural ODE/SDE is a type of autoregressive DGM that assumes the hidden states of a neural network to follow a particular ODE/SDE dynamics. Despite the shared use of SDE in these two lines of works, they have very different learning mechanisms and applications.
To the best of our knowledge, existing neural SDE methods (including your references [1-4]) only focus on low-dimensional problems. For example, the maximum dimensionality in [1] and [4] is four and two, respectively, using an air quality dataset. Meanwhile, [2, 3] do not study dynamics forecasting: [2] only has image experiments, and [3] studies event prediction on low-dimensional datasets. In our work, the spring mesh dataset, with dimensionality 400 = 10 x 10 x 4, is the lowest-dimensional among all datasets, and we note that this dimensionality further increases when performing multi-step forecasting.
The reason these neural SDE papers only experiment with low-dimensional multivariate time series is exactly the ODE/SDE assumption on the hidden state mentioned above. Due to the need to solve an ODE/SDE through a numerical solver, neural ODE/SDE methods currently struggle with high-dimensional data such as the video data in our work. We would be more than willing to add any baselines that have been applied to high-dimensional spatiotemporal data similar to the one we study in this paper, and would appreciate any pointers to such potential baselines.
**Q3:**
> It is unfair to use $F_\theta$ trained with Algorithm 1 in the baseline method Dropout. The forecaster used in Dropout should be trained with an objective such as $||F_\theta(x_{t+i},i)−x_{t+ℎ}||^2$ instead.
**A3:**
The Dropout baseline was NOT trained with Alg. 1 (that one was only designed/used for DYffusion). Instead it was trained on the objective $||F_\theta(x_t, i) - x_{t+i}||^2$ for $1\leq i \leq h$. We realize that lines 237-238 and 242-244 may be confusing in this respect, and we will improve their clarity in our revised version. We will add the aforementioned objective that we used to train the Dropout baseline to the appendix (and refer to it in the main text). When we say that the baseline is trained _"analogously to the DYffusion forecaster"_ (line 243-44) we meant to only refer to the fact that both methods rely on a time-conditioning mechanism.
**Q4:**
> One of the main contributions mentioned is that DYffusion reduces complexity. However, a complexity analysis is missing.
**A4:**
See Table 7 in the appendix for a comparison of how DYffusion effectively reduces the neural network input and output dimensionality and memory needs, and improves efficiency, compared to a video diffusion model/MCVD.
**Q5:**
> Neural SDE [1,2,3,4] parameterizes the stochastic dynamics for modeling continuous-time processes and is therefore inherently suitable for probabilistic continuous-time forecasting. However, there is no discussion on a range of related works on neural SDE.
**A5:**
Thank you for the references. We will make sure to discuss and cite them, as well as the neural SDE literature more generally, in our revised version. As mentioned in **A2**, at this moment we don't believe that neural SDE methods are an appropriate baseline for the high-dimensional spatiotemporal forecasting problem studied in our paper, since all Neural SDE papers that we are aware of (incl. [1-4]) only cover low dimensional time series.
**Q6:** _Figure 3_
**A6:**
> Why does Dropout perform so poorly at t=2 in Figure 3?
That is a good observation. Generally, the beginning of the Navier-Stokes trajectories looks qualitatively different from the following timesteps, where the fluid is already clearly "flowing". As such, it is possible that those initial timesteps are harder to forecast well. Plus, by definition, there are relatively few training examples of the very start of the training trajectories compared to their mid/end points. This might especially be a problem for underdispersive models (SSR < 1) that do not capture the full data distribution well.
> Does this observation indicate the mentioned Weakness 3?
No, see our answer **A3**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification on the differences between DYffusion and neural ODE/SDE that I had overlooked. Based on your helpful explanation, I have some additional questions to better understand your work:
Given that the horizon $h$ remains fixed once chosen, and considering that the training of the interpolator $\mathcal{I}$ is independent of the forecaster $\mathcal{F}$, does this imply that alternative forecasting models capable of predicting $x_{t+h}$ from $x_t$ can also be employed to achieve continuous forecasting in conjunction with the trained interpolator $\mathcal{I}$? If such is the case, it would be beneficial to conduct further ablation studies to elucidate the **necessity and advantages of the forecaster** $\mathcal{F}$. These studies could shed light on how the designed diffusion training and inference contribute to the efficacy of the forecaster.
For example, **exploring diffusion settings in common practice** could be insightful, i.e., investigating a forecaster that generates $x_{t+h}$ by denoising from Gaussian noise conditioned on $x_t$ might be worth considering, especially given the lack of evidence indicating that the intermediate $\hat{x}_{t+i_n}$s are utilized. It would be valuable to address why such a configuration was not selected and how it contrasts with the chosen approach.
As for neural SDE on high-dimensional data, I would like to suggest considering Neural-SPDE [5], which applies neural SDE to Navier-Stokes equations in a 64x64 grid setting. Its codebase provides implementations of Neural-SPDE as well as various baselines, including NCDE and NRDE. I recommend leveraging these baselines to showcase the effectiveness of DYffusion against neural ODE/SDE based approaches, if time permits for an empirical comparison.
[5] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, NeurIPS 2022.
---
Reply to Comment 1.1.1:
Comment: We genuinely thank you for carefully reading through our rebuttal and acknowledging the significance and novelty of our approach, especially compared to neural ODE/SDE methods.
> Given that the horizon ℎ remains fixed once chosen, and considering that the training of the interpolator $\mathcal{I}$ is independent of the forecaster ${F}$, does this imply that alternative forecasting models capable of predicting $x_{t+h}$ from $x_t$ can also be employed to achieve continuous forecasting in conjunction with the trained interpolator $\mathcal{I}$?
Yes, in principle, this would be possible.
> If such is the case, it would be beneficial to conduct further ablation studies to elucidate the necessity and advantages of the forecaster ${F}$.
Please note the new figure 5 that we will include to our revised appendix (anonymous gdrive link: https://drive.google.com/file/d/1jFYdEn1tAkJ--HsndoXRndjJ2X_2qPTk/view?usp=sharing), where we ablate exactly this. Here, we show that the forecasts of $x_{t+h}$ of the forecaster $F$ gain more skill as the reverse diffusion process progresses (i.e. as the corresponding time of the diffusion step comes close from $t$ to $t+h$).
> For example, exploring diffusion settings in common practice could be insightful, i.e., investigating a forecaster that generates $x_{t+h}$ by denosing from Gaussian noise conditioned on $x_t$ might be worth considering
We thank the reviewer for this idea, and agree that this is an interesting method to consider. Honestly, we did not think of this method since it has not been proposed or used before. We believe that future work can explore whether such a method would actually perform well. In this paper, we focus on our own proposed method, common multi-step forecasting approaches, and video diffusion models as baselines. Notably, while video diffusion is the most straightforward approach for applying conventional diffusion models to multi-step forecasting, it had not been applied before to this problem.
> (..) given the lack of evidence indicating that the intermediate $\hat{x_{t+i_n}}$ s are utilized
This is wrong. When the optional line 6 of Alg. 2 is disabled, the intermediate $\hat{x_{t+i_n}}$ are used as forecasts for the intermediate timesteps between $t$ and $t+h$. This is what is actually being done in our SST dataset experiments. In addition, you can see in our ablations table (*No ref.* row in Table 5 in the appendix) that utilizing these intermediate $\hat{x_{t+i_n}}$ works well (but can sometimes be outperformed by enabling line 6 in Alg. 2).
> As for neural SDE on high-dimensional data,I would like to suggest considering Neural-SPDE [5], which applies neural SDE to Navier-Stokes equations in a 64x64 grid setting. Its codebase provides implementations of Neural-SPDE as well as various baselines, including NCDE and NRDE. I recommend leveraging these baselines to showcase the effectiveness of DYffusion against neural ODE/SDE based approaches, if time permits for an empirical comparison.
We thank you for the pointer to this baseline. In the limited time frame we had, we have done our best to run neural SPDE as a baseline, focusing on the spring mesh dataset since it has the lowest dimensionality amongst all our datasets. Please see the resulting figure in the following anonymous gdrive link: https://drive.google.com/file/d/1IoQuNvNKAphrLbBX7zUHTPsd2V5S96Po/view?usp=sharing
This figure is the same as Fig 7b) of our joint rebuttal PDF, but extended by your proposed neural SPDE baseline. As you can see, our baselines and DYffusion outperform it.
To attain this result, we run neural SPDE with ``n_iter=1, modes1=100, modes2=100, hidden_channels=32, solver='fixed_point'`` and a training horizon of ``h=100``. We note that increasing ``n_iter`` leads to significantly worse performance, and similarly for reducing the number of modes. We also would like to note that, to our understanding, neural SPDE models the noise W as a sample path from a Wiener process, but does not really model the distribution of the forecast; thus, the resulting forecasts are deterministic. Lastly, neural SPDE shines when the noise forcing is known (in their paper's results for $u_0 \rightarrow u$, where the noise $\xi$ is unknown, neural SPDE does not perform significantly better than baselines), which is not the case for our datasets or what can be expected in real-world applications.
In particular, they first train a time-dependent interpolation network which learns to interpolate the temporal dynamics given a frame at the horizon time $x_{t+h}$, a frame $x_t$, and an index $i$ interpolating between the two distributions.
Then, the idea is to frame a forecast model as a denoising model between $x_t$ and $x_{t+h}$, that is, given an interpolated frame, to be able to recover the original 'denoised' $x_{t+h}$ frame.
This approach allows for memory efficient multi-step predictions, as opposed to existing diffusion models for videos.
They apply the developed model to the forecasting of sea surface temperatures, Navier-Stokes flows, and spring mesh systems.
Strengths: - The main advantage of the proposed approach seems to be the small memory footprint as it is constant w.r.t. the horizon, in contrast with standard dynamics forecasting (if I understand correctly).
- It also allows for 'upscaling' the resolution, or for making prediction at any time value, thus is not constrained to a regular discretisation of the time axis.
- The experiment on the sea surface temperatures dataset is quite promising, as the proposed method is able to accurately forecast not only the predicted mean but also the uncertainty with a lower computational cost than the video diffusion model MCVD. It would be valuable to explore whether one could obtain better performance at the expense of more computational cost.
Weaknesses: - I found the writing to be confusing at times, I think that Section 3 could be better conveyed, in particular the choice of notations, but also the relation with diffusion models, and the 'cold posterior' sampling scheme.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Equation 3: How is the architecture handling the two inputs $x_t$ and $x_{t+h}$? By doubling the number of input channels?
- line 119: Why do we need this? This deserves more explanation.
- Equation 4: Why do we need the interpolator network here? Can't we directly predict $s^{(n+1)}$ with $F_\theta(s^{(n)}, i_{n+1})$? Aren't we actually extrapolating with the interpolator here?
- Algorithm 2: What is happening in line 4? How does this iteratively refine the prediction of $x_{t+h}$? This is likely worthy of some more explanation.
- Equation: Isn't this ODE implying that the dynamics is Markovian?
- line 249: What is the Continuous Ranked Probability Score (CRPS)? Worth at least giving a high level idea I think.
- line 268-269: Why not train the DDPM with more diffusion steps? Since the number of steps is a hyperparameter for all methods, it would be easier to compare them with the same number of steps, though it could be varied to see how performance vs. computation evolves.
- Table 1: Would you know why the performance of the models is widely different between the SST and the Navier-Stokes datasets?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive comments regarding the memory efficiency, continuous-time forecasts, and SST experiments of our work, as well as your valuable feedback to which we respond below.
**Q1:** _Confusing section 3_
**A1:**
> I found the writing to be confusing at times, I think that Section 3 could be better conveyed, in particular the choice of notations, but also the relation with diffusion models, and the 'cold posterior' sampling scheme.
Thank you for bringing this to our attention. We will add a glossary to the appendix that contains all our choice of notations in a single place. Could you let us know which specific parts were confusing? We would like to address these.
Additionally, we will write out the objective for a conventional diffusion model (Eq. (1)) alongside our corresponding objective (Eq. (3)) to make the connection in terms of analogous objectives more clear.
Concretely, for a "generalized diffusion model" [2] (equation 1):
$$||R_\theta(D(\mathbf{s}^{(0)}, n), \mathbf{x_t}, n) - \mathbf{s}^{(0)}||^2 = ||R_\theta(\mathbf{s}^{(n)}, \mathbf{x_t}, n) - \mathbf{s}^{(0)}||^2,$$
where $\mathbf{s}^{(0)} = \mathbf{x_{t+1:t+h}}$, and $\mathbf{s}^{(n)}$ is a noisy version of $\mathbf{s}^{(0)}$ (the level of noise increases with $n$).
And for DYffusion (equation 3):
$$||F_\theta(\mathcal{I_\phi}(\mathbf{x_t}, \mathbf{x_{t+h}}, {i_n}), {i_n}) - \mathbf{x_{t+h}}||^2
= ||F_\theta(\mathcal{I_\phi}(\mathbf{x_t}, \mathbf{s}^{(N)}, {i_n}), {i_n}) - \mathbf{s}^{(N)}||^2
= ||F_\theta(\mathbf{s}^{(n)}, {i_n}) - \mathbf{s}^{(N)}||^2,$$
where $\mathbf{s}^{(N)} = \mathbf{x_{t+h}}$ and $\mathbf{s}^{(n)} \approx \mathbf{x_{t+i_n}}$ is now a stepped backward in time version of $\mathbf{s}^{(N)}$. Note that the diffusion step indexing (superscript $n$) for DYffusion is reversed so that it aligns with the temporal indexing (subscript $t$), such that e.g. $\mathbf{s}^{(n)} \approx \mathbf{x_{t+i_n}}$ temporally precedes $\mathbf{s}^{(n+1)} \approx \mathbf{x_{t+i_{n+1}}}$. Accounting for the opposite order of indexing, the similarity between both approaches becomes clear.
> the 'cold posterior' sampling scheme
The cold sampling algorithm is directly taken from [2] (their Alg.2; only the notation is adapted), and thus we refer the reader to [2] for intuition on how/why it works. For example, an alternative to sample from DYffusion (or any "generalized diffusion model") would be to replace line 4 in Alg. 2 with simply: $\mathbf{x_{t+i_{n+1}}} = \mathcal{I_\phi}(\mathbf{x_t}, \hat{\mathbf{x_{t+h}}}, i_{n+1})$. This corresponds to the _naive sampling_ algorithm of [2] (their Alg. 1) which performs worse than cold sampling as shown in [2] and in our new Table 10 in the joint rebuttal PDF. We will make this more clear in our revised version.
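Schematically, the difference between the two sampling updates can be sketched as follows (``I_phi`` and ``F_theta`` stand in for the trained interpolator and forecaster; this mirrors Algs. 1-2 of [2] only at a high level, not our exact implementation):

```python
def naive_step(x_t, x_cur, i_cur, i_next, I_phi, F_theta):
    # Naive sampling: forecast x_{t+h} and re-interpolate from scratch,
    # discarding the current state x_cur.
    x_hat_h = F_theta(x_cur, i_cur)
    return I_phi(x_t, x_hat_h, i_next)

def cold_step(x_t, x_cur, i_cur, i_next, I_phi, F_theta):
    # Cold sampling: correct the current state rather than discarding it,
    # which is more robust to interpolator/forecaster errors.
    x_hat_h = F_theta(x_cur, i_cur)
    return x_cur - I_phi(x_t, x_hat_h, i_cur) + I_phi(x_t, x_hat_h, i_next)
```

With a perfect forecaster and an exact interpolator the two updates coincide; they differ precisely when the models are imperfect, which is where cold sampling's correction term helps.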
**Q2:** _More questions_
**A2:**
> Equation 3: How is the architecture handling the two inputs $\mathbf{x_t}$ and $\mathbf{x_{t+h}}$? By doubling the number of input channels?
Yes, exactly. We have updated the text to mention this explicitly.
> line 119: Why do we need this? This deserves more explanation.
Essentially, the look-ahead loss term simulates one step of the sampling process (Alg. 2) and backpropagates through it, so that the network is trained with an objective that is closer to (partially mimics) the sequential sampling process.
> Equation 4: Why do we need the interpolator network here? Can't we directly predict $\mathbf{s}^{(n+1)}$ with $F_\theta(\mathbf{s}^{(n)}, i_{n+1})$? Aren't we actually extrapolating with the interpolator here?
$F_\theta$ always forecasts $\mathbf{x_{t+h}}=\mathbf{s}^{(N)}$ (see line 109), so we cannot use it to predict $\mathbf{s}^{(n+1)}$. As noted in line 132, $\mathbf{s}^{(n+1)}$ corresponds to $\mathbf{x_{t+i_{n+1}}}$. We will make clear that for all $n \in \{1, ..., N-1\}$ it holds that $0 < i_n < h$, so that $\mathbf{s}^{(n+1)}=\mathbf{x_{t+i_{n+1}}}$ is always between $\mathbf{x_{t}}$ and $\mathbf{x_{t+h}} \approx F_\theta(\mathbf{s}^{(n)}, i_{n+1})$, and thus the interpolator can be used to interpolate the timestep $t+i_{n+1}$.
> Algorithm 2: What is happening in line 4? How does this iteratively refine the prediction of $\mathbf{x_{t+h}}$? This is likely worthy of some more explanation
See the last part of our response **A1** (regarding _"the 'cold posterior' sampling scheme"_). Essentially, it allows for more robust sampling over naive sampling defined above.
> Equation: Isn't this ODE implying that the dynamics is Markovian?
No. We assume you are referring to equation (5) or (6): the dynamics are not Markovian because each prediction step depends on $\mathbf{x_t}$ (the initial state) in addition to the previous state $\mathbf{x_s}$.
> line 249: What is the Continuous Ranked Probability Score (CRPS)? Worth at least giving a high level idea I think.
The CRPS is a common metric for probabilistic forecasts [12, 19, 48, 50, 57]. It measures the difference between the forecasted and observed cumulative distribution functions (CDFs), and thereby accounts for both the accuracy and the sharpness of the forecast ensemble: small spread (i.e. sharpness) is rewarded only if the forecast is accurate. For a deterministic forecast, the CRPS reduces to the mean absolute error. We will add a more detailed explanation to our revised paper.
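For intuition, here is a minimal sketch of the empirical CRPS for a scalar observation, using the standard kernel identity $\mathrm{CRPS} = \mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$ (a generic illustration, not the exact implementation used in our evaluation):

```python
def crps_ensemble(members, obs):
    """Empirical CRPS of a scalar ensemble forecast against one observation.

    Uses CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are independent
    draws from the forecast distribution (approximated by the ensemble).
    """
    m = len(members)
    accuracy = sum(abs(x - obs) for x in members) / m             # skill term
    spread = sum(abs(x - y) for x in members for y in members) / (m * m)
    return accuracy - 0.5 * spread   # small spread only helps if accurate
```

A single-member "ensemble" makes the spread term vanish, so the CRPS reduces to the absolute error, matching the deterministic case described above.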
> line 268-269: Why not training the DDPM with more diffusion steps?
We tried 1000 diffusion steps for the DDPM as well, but found it decreased the performance on the SST dataset.
> Table 1: Would you know why the performance of the models is widely different between the SST and the Navier-Stokes datasets?
We are also intrigued by this observation. Our current hypothesis is that the limited size of the Navier-Stokes/spring mesh datasets could be an important factor. Indeed, conventional diffusion models are data-hungry, since they need to learn how to map a noise distribution to the high-dimensional data distribution. We hope to investigate this more in the future.
---
Rebuttal Comment 1.1:
Title: response
Comment: Thanks for the response to my comments!
> Could you let us know which specific parts were confusing? We would like to address these.
I think it's partly due to the various indices $h$, $i_n$, $N$ etc. What's more, it took me a second read to eventually properly understand the interplay between the _forecaster_ and the _interpolator_. Thus I'd focus on this.
> 'cold posterior' sampling scheme
I had a closer read of the related paper and things are now clarified. The additional results from Table 10 also bring further empirical evidence for using this scheme. I still believe that it's important to introduce this scheme a bit more than in the submitted manuscript.
I've updated my score to reflect the clarifications and the fact that part of my concerns have been addressed. The main one remaining being clarity and presentation, yet this is tricky to address without being able to update the manuscript. I hope that the authors would improve on this so that most can benefit from this submission.
---
Reply to Comment 1.1.1:
Comment: We genuinely thank you for your feedback and for taking the time to read through both our clarifications and the related paper introducing 'cold sampling'. We will introduce this scheme more thoroughly in our revised paper, and use the easier-to-understand 'naive sampling' counterpart as a starting point for introducing the better-performing cold sampling.
> What's more it took me a 2nd read to eventually understand properly the interplay between the forecaster and the interpolator. Thus I'd focus on this.
Thank you for your feedback! Do you think that Figure 6a) in our appendix makes the interplay clearer, potentially extended by explicitly adding the interpolator and forecaster symbols to their respective arrows? Would it be beneficial to add it to the main text? | Summary: This paper proposes a new forecasting model for spatiotemporal data. The idea is based on separately training an interpolator and a forecaster network and applying them in an alternating fashion at inference time to iteratively refine the forward prediction. The inference procedure loosely resembles the denoising process in a diffusion model, where one computes a new denoising target and moves slightly towards it at every step. Numerical experiments are conducted on several datasets to validate its skills.
Strengths: * The idea of iteratively refining forward predictions is interesting and quite novel to my knowledge.
* The paper is generally well written and easy to follow.
* The numerical experiments cover multiple non-trivial tasks and show moderate improvement in the metrics against benchmarked diffusion-based and ensemble prediction models.
Weaknesses: * Although advertised as a diffusion model, the actual connection (at least to diffusion-based generative models) is handwavy at best. It is not clear that the underlying process arising from the defined noising and denoising operations forms a well-defined generative model (for example, see how DDPM is derived with clearly defined conditional and marginal distributions at each step). Only the variational posterior is provided in equation (4).
* The model is mostly deterministic (except for the last step that injects some Gaussian noise) so I think it’s important to show comparison against deterministic forecast models. The high memory footprint for multi-step training is somewhat fair, but there exist ways to improve single-step models (e.g. noise injection, curriculum training, etc.) which should be considered when validating the proposed model. This is necessary to justify doing the additional steps of interpolation.
Overall, given the strengths and weaknesses in its current state, I tentatively vote for borderline reject.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * Equation (4), the second line is not a probability. Do you mean a $\delta$ distribution at the interpolator output?
* Also equation (4), adding Gaussian noise only at the last step looks a bit cryptic to me. What is this trying to achieve besides adding randomness to the prediction and how is the noise level $\sigma$ determined?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mention that input and output space must be the same and that inference costs are higher than predicting directly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the positive comments regarding the novelty, clarity, and experiments of our work, as well as your valuable feedback to which we respond below.
_**Potential misunderstanding:**_ We would like to point out that there seems to be a key misunderstanding regarding our method being deterministic while it is actually probabilistic . As noted in our problem statement section (see Setup, lines 65-67) we specifically study the problem of _probabilistic_ forecasting. Consequently,, our interpolator network, $\mathcal{I_\phi}$, is designed to produce stochastic outputs (see the abstract, Algorithm 1 (Stage 2, line 1), and lines 106 & 111). We achieve this by enabling dropout at inference time. We show that this is a key component of our framework in the last two ablation rows (No. Dr, and No Dr. & $\sigma_\epsilon$) of Table 5 in the Appendix.
In light of this misunderstanding, we will strive to make our generative formulation clearer in the revised methodology section, as mentioned in answer A3 below.
**Q1:** _Connection to conventional diffusion models_
**A1:**
> Although advertised as a diffusion model, the actual connection (at least to diffusion-based generative models) is handwavy at best. It is not clear that the underlying process arising from the defined noising and denoising operations form a well-defined generative model
Our work builds upon cold diffusion [2], which "_paves the way for generalized diffusion models that invert arbitrary processes._" The cold sampling algorithm, which we use to sample from DYffusion (see Alg. 2), is a generalization of DDIM sampling for "generalized diffusion models" (see Appendix A.6 of [2]) and a key ingredient to make DYffusion work (see Table 10 of the joint rebuttal PDF). Our proposed forward and reverse processes are specifically designed to fall under this framework.
*Given this context, it becomes clear that DYffusion is a generative model for forecasting that falls in the category of "generalized diffusion models."* We can see how this relationship may not have been clear in the text, and have added a new paragraph discussing these connections to our appendix.
> DDPM is derived with clearly defined conditional and marginal distributions at each step. Only the variational posterior is provided in equation (4).
You are correct that DYffusion loses some of the benefits of using a simple Gaussian distribution as the forward process. This is because DYffusion's forward process is based on a stochastic interpolator network, $\mathcal{I_\phi}$. We believe that this is a fair trade-off to make, especially because ultimately it is the posterior that we care about for forecasting. Future work can explore regaining some of the benefits from DDPM/DDIM.
**Q2:** _Comparison against deterministic models_
**A2:**
> The model is mostly deterministic (except for the last step that injects some Gaussian noise) so I think it’s important to show comparison against deterministic forecast models
As mentioned in the "potential misunderstanding" paragraph above, DYffusion is NOT a deterministic model. Its main source of stochasticity comes from the forward process/interpolator network. As such, deterministic models are not an adequate baseline, and metrics such as CRPS and SSR are necessary for properly evaluating the probabilistic forecasts. In addition, we would like to highlight that our Perturbation baseline forecasts deterministically (see lines 649-653) and allows for the assessment of the ensembled deterministic model you might be looking for. **Also, please see the discussion of Fig. 7 in the joint rebuttal PDF, where we compare against a deterministic single-step model.**
> The high memory footprint for multi-step training is somewhat fair, but there exist ways to improve single-step models (e.g. noise injection, curriculum training, etc.) which should be considered when validating the proposed model.
Thanks for the suggestion. Noise injection and curriculum training have only been studied in the context of deterministic forecasting, not the probabilistic forecasting setting of our paper. These techniques are orthogonal to the contributions of this work addressing multi-step forecasting challenges. For example, noise injection is a complementary method that could be added to further improve DYffusion too.
While we would appreciate a reference for "curriculum training", we believe that you are thinking of approaches like the one used by [35], which is one particular way of performing multi-step training. In this paper, we choose the more common approach of directly predicting any of the multiple steps in a single forward pass (i.e. without needing to backpropagate gradients recursively as in [35]). Extensive benchmarking of how to best perform multi-step training is beyond the scope of this paper.
**Q3:** _Equation (4)_
**A3:**
> Equation (4), the second line is not a probability. Do you mean a \delta distribution at the interpolator output?
As mentioned in the "potential misunderstanding" paragraph above, the interpolator network is designed to be stochastic by enabling dropout at inference time. We will make this clearer in our writing. For example, we have updated the text to make it explicit that $\mathcal{I_\phi}$ depends on a random variable $\xi$ (here, the randomly dropped-out weights of the neural network) by writing $\mathcal{I_\phi}(\mathbf{x_t}, \mathbf{x_{t+h}}, i | \xi)$.
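As an illustration of the mechanism (a toy, pure-Python sketch; our actual interpolator is a neural network with dropout layers, so the names and shapes here are hypothetical):

```python
import random

def mc_dropout_forward(x, weights, p_drop=0.1, rng=random):
    # Toy one-layer "network": at inference time each weight is dropped with
    # probability p_drop; the random mask plays the role of xi in I_phi(.|xi).
    kept = [w if rng.random() >= p_drop else 0.0 for w in weights]
    scale = 1.0 / (1.0 - p_drop)  # inverted-dropout rescaling
    return scale * sum(xi * w for xi, w in zip(x, kept))

def sample_forward_passes(x, weights, n_samples=8, seed=0):
    # Repeated stochastic forward passes on the same input yield an
    # ensemble of outputs, one per sampled dropout mask.
    rng = random.Random(seed)
    return [mc_dropout_forward(x, weights, rng=rng) for _ in range(n_samples)]
```

Each call draws a fresh dropout mask $\xi$, so repeated forward passes on the same input produce an ensemble of stochastic outputs.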
> Also equation (4), adding Gaussian noise only at the last step looks a bit cryptic to me. What is this trying to achieve besides adding randomness to the prediction and how is the noise level determined?
We added Gaussian noise to make our Eq. (4) consistent with Eq. (10) of DDIM [60]. However, in practice (see our Alg. 2), we do not add any Gaussian noise in the last step (as in DDIM). This is purely notational, so we will remove this part in our revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarifications and I apologize for not registering the obvious stochasticity introduced by inference dropout in the interpolator. That said, I do not fully agree that deterministic models are irrelevant here as (a) comparison is done against MSE which is a deterministic metric and (b) probabilistic predictions may be obtained from deterministic models using ensemble with perturbed initial conditions. Uncertainties in probabilistic predictions inevitably contain both model errors and internal variability of the system and the latter is reflected in a setup like (b) to the very least.
I have another follow up question regarding inference speed: how does the cost of the interpolator compare to that of the forecaster? The reason I am asking is that it is more fair to compare performance under the same constraint. For example, one might argue that the fair comparison would be to run an autoregressive model with half the step size (assuming that the autoregressive stepper is similar to the forecaster and interpolator alone). Superior metrics in this benchmark would be much stronger evidence that the guessing long into the future and iteratively refining it is indeed better than looking short-term only.
---
Reply to Comment 1.1.1:
Comment: Thanks for acknowledging the misunderstanding. We hope that you find our manuscript, and especially its originality and significance to the community, more convincing now.
> That said, I do not fully agree that deterministic models are irrelevant here as (a) comparison is done against MSE which is a deterministic metric and (b) probabilistic predictions may be obtained from deterministic models using ensemble with perturbed initial conditions. Uncertainties in probabilistic predictions inevitably contain both model errors and internal variability of the system and the latter is reflected in a setup like (b) to the very least.
We agree. For point (a), please see our new comparison against the deterministic models from [43] in Fig. 7 of our joint rebuttal PDF. For point (b), please note that we explicitly include this baseline in our Table 1, see the ``Perturbation`` row.
> I have another follow up question regarding inference speed: how does the cost of the interpolator compare to that of the forecaster?
This depends on the choice of the corresponding network architectures. Usually, the cost of the interpolator will be at most that of the forecaster. This is because interpolation is an easier task than forecasting, so we can expect to do fine with an architecture of the same or lower complexity as the forecaster. In our experiments, and for simplicity, all interpolator and forecaster networks share the same architecture for the respective datasets. As noted in Appendix B.5.1, we do halve the hidden dimensions of the interpolator relative to the forecaster network on the SST dataset.
> The reason I am asking is that it is more fair to compare performance under the same constraint. For example, one might argue that the fair comparison would be to run an autoregressive model with half the step size (assuming that the autoregressive stepper is similar to the forecaster and interpolator alone). Superior metrics in this benchmark would be much stronger evidence that the guessing long into the future and iteratively refining it is indeed better than looking short-term only.
We understand your concern. Unfortunately, we already train/evaluate on the highest possible temporal resolution, so it is not possible to halve the step size. That being said, the proposed baseline would effectively be trained on a horizon of half the previous length, requiring twice the number of autoregressive steps for our full rollout evaluations on Navier-Stokes and spring mesh. Whether such an approach would enhance the rollout metrics remains uncertain.
It is worth noting that we have already addressed the fact that DYffusion is slower at inference than certain single-forward-pass baselines, as shown in Table 1 and discussed in our limitations section. Reassuringly, DYffusion's higher computational demands during inference, while still lower than those of conventional diffusion models, consistently correlate with improved predictive performance compared to the established baselines. | Summary: In this paper, the authors tackle the long-term forecasting problem applied to dynamical systems. To solve this problem, they propose to use the diffusion principle along with interpolator and forecaster mechanisms. The former interpolates timestep data in between the lookback and target windows (therefore, at a lower resolution).
Authors evaluate their proposal on three datasets, analyze and interpret the results as well as further discuss directions for improvements.
Strengths: * Interesting approach
* Extensive ablation study, with summary in the main paper section
* Interesting discussion of the results
Weaknesses: * Reproducibility (no code; the Navier-Stokes and Spring-Mesh experiment set-ups are presented in the table caption, but I wonder if they are complete for anyone who would like to re-run the same experiments)
* The comparison with the main reference [43] is in my opinion very subjective and should be improved (as the authors build up on [43] for two datasets, it seems to me that [43] should be included as a baseline)
* Requires some proof-reading
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The proposal is very interesting and sounds promising. However, in my opinion, we are missing a representation of the proposed architecture to better understand the paper and how the modules interact with each other.
## Reproducibility
The details of training/validation and testing for Navier-Stokes (NSFlow) and Spring-Mesh (SM) are not provided in the dataset description, but in the experiments caption. And yet, we can wonder if they are complete, which could make it more difficult to reproduce the results.
## Performance comparison
I think we are missing the time column in Table 2, and the one for the NSFlow experiment in Table 1.
`For the out-of-distribution test set of the Navier-Stokes benchmark, the results are almost identical to the one in Tab. 1, so we do not show them.` Why not include them in the appendix and let readers be the judge and make their own opinion?
## Comparison with reference [43]
As the authors build up on the models from ref. [43] for NSFlow and SM, why are they not comparing with the models (UNet and CNN) from that reference? Especially looking at the results from [43]: for NSFlow in Figure 5, the UNet seems to achieve MSE between 0.05 and 0.007. These variations depend on the number of obstacles, and there might be some differences in the set-up, but it would be good to see how DYffusion does compared to these baselines. As is, it does not look to me like the proposed model is doing better, contrary to what the authors are claiming. Other readers might feel the same and wonder what the advantages of DYffusion are. As a consequence, it would be good to define precisely the conditions of NSFlow and SM and compare with the original models.
`It is worth noting that our reported MSE scores are significantly better than the ones reported in [43]` I have difficulty agreeing with this claim. First, the results in [43] are not presented in a table format, so it is more difficult to compare. The authors should either present their results in the same representation (step MSE vs. OoD MSE), for instance in the appendix, for readers to better judge and compare both solutions, or include the models from [43] as a baseline in their Table 2 (or both), while clearly setting the configuration of the forecast to make sure that we are dealing with a similar set-up.
`our MSE results significantly surpass the ones reported in Fig. 8 of [43]`: Again, if the authors wish to compare their results with the model from [43], why not run the forecast with the CNN from reference [43] under the same conditions? As the authors are re-using the CNN from ref. [43], it should be possible, and it would prevent readers from having to check paper [43] to make sure the settings are the same and try to grasp the similarities between the papers, as each paper uses a different representation of the results. In addition, if the CNN performed so poorly in [43], why choose it as the base model for the SM dataset? Why not choose an MLP or nn kernel that seems to perform better in the figure mentioned by the authors?
## Model evaluation
`resulting forecast to be visibly pleasing and temporally consistent` Not really a scientific judgement in my opinion… But it is indeed quite similar. Nevertheless, at t=3.70, the velocities of samples 2 to 5 are not really “visibly pleasing”; how should we interpret this?
## Ablation study
In my understanding, the proposal relies on the forecaster and the interpolator. Therefore, I would have expected to see an ablation study without these modules to judge their importance. Is it impossible to do so?
Is it not possible also to do an ablation study with a different number of frames for the interpolator (k-1)? BTW, unless I missed it, we don’t know what the value of k is for each dataset. Does it have an impact on the performance?
## Proof-read
I found some points to be corrected, and the authors should pay some more attention to possible other typos or issues:
* Line 25. and and require -> and require
* Line 92, should not it be x_{t+1}:{t+h}
* Line 196, Sentence is a bit long, consider breaking it
* Line 271, In the spring mesh dataset -> In the Spring Mesh dataset
* Line 288, whether it is actually able -> whether it is actually possible
* Line 302/303, For the Navier Stokes and spring mesh dataset it was sufficient -> For the Navier Stokes and Spring Mesh dataset, it was sufficient
* Line 327, could be the reason for why -> could be the reason why
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Authors discuss their results and limitations as well as future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would first like to thank you for the positive comments and valuable feedback. We respond to your comments and questions below.
**Q1:** _Reproducibility_
**A1:**
1. **Code:** We have shared with the AC an anonymous link to our code for reproducibility. We will open-source our code and data when the manuscript is published (as noted in the submission questions).
2. **Navier-Stokes and Spring Mesh set-ups:** We include instructions on how to reproduce our experimental results in the code README. We note that our evaluation procedure is *identical* to the benchmark dataset paper [43] (i.e. the same validation/test sets and the same metrics), except that we adapt it to probabilistic models (e.g. sample multiple forecasts per initial condition, compute CRPS and SSR). We will make it clear that we always train on the full training datasets of [43] and that we use the Navier-Stokes with 4 obstacles. We will note all these details in the revised appendix.
**Q2:** _Performance comparison_
**A2:**
> Why not include Navier-Stokes out-of-distribution test set results in the appendix and let readers be the judge and make their own opinion?
This is a good point. We have added these results to the revised appendix, and also show them in Table 10 of the joint rebuttal PDF.
**Q3:** _Comparison with reference [43]_
**A3:** We appreciate this feedback and have revised the manuscript to describe the experimental setup for Navier-Stokes and spring mesh in more detail (see response 2. of **A1** above).
However, we would like to point out that [43] is a dataset paper. All baselines therein are ***deterministic*** models, and as such are not adequate baselines for the ***probabilistic*** forecasting setup that we study in this paper.
Nonetheless, **please see our Figure 7 in the joint rebuttal PDF, where we reproduce the relevant models from [43] and show that we substantially outperform them with all our baselines in terms of MSE**.
Our Navier-Stokes and Spring Mesh ensemble mean MSE scores can be best compared to Fig. 10 (bottom; step prediction UNet with 4 obstacles) and Fig. 8 (bottom; step prediction CNN), respectively, of [43]. While the authors of [43] have not provided tabular results, comparing these figures reveals that 1) our reported MSE scores for Navier-Stokes ($0.022$) are competitive with the ones from Fig. 10 in [43], and 2) our spring mesh MSE ($4.74 \times 10^{-4}$) is much lower than any reported MSE in Fig. 8 of [43], where none of the step prediction models achieve an MSE below $10^{-2}$. We have reached out to the authors of [43] to get exact values for these results so that we can report them in our paper as supplementary results.
While helpful, this comparison is insufficient on its own, as MSE can only be part of the evaluation of a probabilistic forecasting model, since it does not capture the skill of the forecasted distribution like the metrics CRPS and SSR.
Additionally, please note that our Perturbation baseline can be used by the reader to assess how direct applications of the models from [43] perform on our probabilistic skill evaluation. Indeed, except for the baselines being trained to forecast multiple steps, our Perturbation baseline, which forecasts deterministically (see lines 649-653), is for Navier-Stokes identical to the UNet used in [43].
> In addition, if CNN performed so poorly in [43], why choosing it as the base model for SM dataset? Why not choose an MLP or nn kernel that seems to perform better on the figure mentioned by authors?
Diffusion models usually use UNet backbones, so to stay as close as possible to that, we chose the CNN model. In addition, our results show that the spring mesh CNN can actually perform very competitively when employing stochastic multi-step training as done for our baselines.
**Q4:** _Model evaluation_
**A4:**
> _"resulting forecast to be visibly pleasing and temporally consistent"_. Not really a scientific judgement in my opinion
We believe that a qualitative evaluation of the forecasted videos is important to complement the reported metrics.
> Nevertheless, at t=3.70, the velocity of sample 2 to 5 are not really “visibly pleasing”, how should we interpret this?
Such behavior is to be (sometimes) expected for long rollouts of a probabilistic forecasting model. This is because we hope that the probabilistic model can capture any of the possible, uncertain futures (instead of forecasting their mean, as a deterministic model would do). As such, it is reassuring that our samples show sufficient variation while also covering the ground truth (sample 1).
**Q5:** _Ablation study_
**A5:**
> In my understanding, the proposal relies on the forecaster and interpolator. Therefore, I would have expected to see an ablation study, without these modules to judge of their importance
The first sentence is correct. However, it is not possible to use our framework without both modules, as the sampling procedure relies on both. The Dropout baseline, i.e. a pure forecasting model, is the closest we can get to this.
> Is it not possible also to do ablation study with different frame number of the interpolator (k-1)? BTW, unless I missed it, we don’t know what the value of k for each dataset is.
We have added a table ablating the choice of k for the SST dataset to the appendix, also shown in Table 12 of our joint rebuttal PDF. We report the value of k for each dataset in lines 671-673, but we will also add it as a new row to the hyperparameter Table 4.
**Q6:** _Proof-read_
**A6:**
Thank you for helping point out several typos – we have fixed them. To clarify, we chose to write spring mesh in lowercase in the main text because this is what the original dataset paper [43] does (while Navier-Stokes is capitalized).
> Line 92, should not it be x_{t+1}:{t+h}
No. We write $\mathbf{x_{t:{t+h}}}$ because we want to include the initial conditions $\mathbf{x_t}$ in this description (in addition to the to-be-forecasted snapshots $\mathbf{x_{t+1:{t+h}}}$).
---
Rebuttal Comment 1.1:
Comment: Thank you for these clarifications.
I am satisfied with the answers, except with the comparison to [43]. Even if the models in [43] are deterministic, if your goal is to have the best single- and multi-step predictions, your probabilistic model should beat both the probabilistic SOTA and the deterministic SOTA. If you consider deterministic and probabilistic to be different domains, then don't mention that your model does better than [43] in the first place.
However, I would like to emphasize that all the details and additional explanations provided here should be included in the new revision of the paper (either in the main paper or the appendices).
Authors should take extra care that all their equations, experiment set-ups, and results are easily understandable for others to reproduce them if necessary.
Authors should also be careful about typos (in the rebuttal document, in the caption of Figure 7, the reference should be [43], not [44], right? Same for Table 12? Maybe you added a new reference which changed the order, so that 43 is now 44, but you should double-check it and adapt it in the Figure 7 legend).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and taking the time to carefully read our rebuttal. We are glad to hear that all answers except the following point satisfied you.
> I am satisfied with the answers, except with the comparison to [43]
It is unrealistic to expect a probabilistic model to "beat" both the probabilistic SOTA and the deterministic SOTA, because they are fundamentally different classes of models with very different objectives and evaluation criteria. A probabilistic SOTA is not optimized for point estimates, which are typically evaluated by MSE. Instead, probabilistic models emphasize calibration and coverage, as reflected in the CRPS. Therefore, they are not comparable to a deterministic SOTA. **However, we now see the benefit of including a direct comparison to the models from [43] in terms of MSE, which is why we will include the new Figure 7 of the joint rebuttal PDF, plus a discussion, in our revised paper**. *Please let us know if there is any other benchmarking against deterministic models or [43] that you think is important to include.*
> if your goal is to have the best single- and multi-step predictions
We would like to stress that our goal is NOT to "have the best single-step predictions"; rather, we focus on multi-step probabilistic forecasting, and especially long rollouts, in this paper. This is an important distinction: while the models from [43] are appropriate for single-step predictive-skill comparisons, they lack stability for long-term forecasts (the focus of this work). We demonstrate this in Figures 7a and 7b of our joint rebuttal PDF, where the models from [43] diverge or significantly underperform compared with our method AND the baselines in terms of MSE. This challenge is why we focus on multi-step/long-term probabilistic forecasting.
> However, I would like to emphasis that all the details and additional explanations provided here should be included in the new revision of the paper (either in the main paper or appendixes).
Rest assured that we will include all of these details in our revised paper or appendix.
> Authors should also be careful of the typos (in the rebuttal document, in the caption of figure 7, the reference should be [43] not [44], right? Same for Table 12? Maybe you added a new reference which change the order and now 43 is 44, but you should double check it and adapt it in the Figure 7 legend).
Thank you for pointing this out; you are correct that [44] in our joint rebuttal PDF is [43] from our submitted main paper. We apologize for this inconsistency, which arose because we strove to integrate the reviewers' feedback directly into our revised paper. This will not be a problem in the final version. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments.
We are particularly encouraged by the reviewers finding our work _"interesting"_ (WiMo and 7msc), _"quite novel"_ (7msc), and _"quite promising"_ (XpdM).
We are glad to hear that reviewers found our paper _"well written and easy to follow"_ (7msc), with numerical experiments that _"cover multiple non-trivial tasks"_ (7msc) and an _"extensive ablation study"_ (WiMo).
One common issue raised was insufficient clarity in our methodology section (XpdM), which may have caused misunderstandings regarding our method being "deterministic" (7msc) or requiring benchmarking against deterministic baselines (WiMo).
As reviewer 4TEk correctly notes, our work _"reimagines the continuous-time probabilistic forecasting problem as a diffusion process"_, and we would like to re-emphasize that our **method was specifically designed to _generate probabilistic_ forecasts**, which is why our baselines and evaluation procedure target the probabilistic forecasting setting too.
In our joint rebuttal PDF we have added the following new figures and tables:
- Figure 7 shows that deterministic single-step forecasting, as in [43], is substantially outperformed by all of our own probabilistic multi-step baselines as well as by DYffusion. In addition, DYffusion performs especially well on long-range forecasts relative to the baselines. These results were requested by reviewer **WiMo** and should also interest reviewer **7msc**, who requested that we "show comparison against deterministic forecast models". To produce Fig. 7, we reproduced the UNet and CNN baselines from [43]. The only difference between them and our Dropout-multi-step baseline (called "Dropout" in our paper) is that the latter is trained to forecast multiple steps and has inference dropout enabled.
- Table 10 shows that cold sampling (Alg. 2) substantially outperforms naive sampling, especially on the SST dataset. Cold sampling was proposed by [2], while naive sampling is their generalization of DDPM sampling to "generalized diffusion models". Naive sampling corresponds to replacing line 4 in Alg. 2 with: $\mathbf{x}_{t+i_{n+1}} = \mathcal{I}_\phi(\mathbf{x}_t, \hat{\mathbf{x}}_{t+h}, i_{n+1})$. This finding is consistent with [2], who find that naive sampling "works well for noise-based diffusion" but "yields poor results" for generalized diffusion models, and should be especially interesting to reviewers **7msc** and **XpdM**.
- Table 11 provides the ablation requested by reviewer **WiMo** on the value of k, i.e., the number of artificial diffusion steps used by DYffusion, for the SST dataset.
- Table 12 provides the Navier-Stokes out-of-distribution results requested by reviewer **WiMo**.
We believe the valuable feedback provided by the reviewers has significantly improved the quality and clarity of our paper, strengthening its core contributions: a novel framework for probabilistic dynamics forecasting, supported by robust empirical evidence and an extensive ablation study.
Pdf: /pdf/3d15e3518d4867b76fce08c16bd1870b879dc05f.pdf | NeurIPS_2023_submissions_huggingface | 2023
Full-Atom Protein Pocket Design via Iterative Refinement | Accept (spotlight) | Summary: In this paper, the authors proposed a Full-Atom Iterative Refinement framework (FAIR) for protein pocket sequence and 3D structure co-design. Generally, FAIR has two refinement steps (backbone refinement and full-atom refinement) and follows a coarse-to-fine pipeline. The influence of side-chain atoms, the flexibility of binding ligands, and sequence-structure consistency are well considered and addressed. Extensive experiments on two datasets show the advantage of FAIR in generating high-quality protein pockets.
Strengths: 1. The paper is well-written and easy to follow. Existing related works are well-discussed.
2. Figure 1 and 2 clearly illustrate the protein pocket design problem and the proposed method FAIR.
3. As far as I know, this is the first paper that studies protein pocket sequence-structure co-design with deep learning methods. The importance and background of the problem are well stated. The problem is well formulated in Sec. 3.1.
4. The coarse-to-fine architecture as well as the full-shot iterative refinement schemes are reasonable and effective. The modeling of side-chain atoms, sequence-structure consistency, and the flexibility of binding ligands are well considered.
5. Extensive experiments on CrossDocked and Binding MOAD dataset compared with 5 representative baselines show the effectiveness of FAIR. The code of FAIR is also provided for reproduction.
Weaknesses: 1. In the experiments, the authors redesign all the residues that contain atoms within 3.5 Å of any binding ligand atoms. Can FAIR design larger regions of the protein pocket containing more residues?
2. The authors may explore the influence of initialization strategies on FAIR.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can other protein design methods or structure-based drug design methods be adapted to the studied pocket design problem?
2. Why is there no hyperparameter weight to balance the two loss functions, Equations 9 & 10?
3. Can FAIR be leveraged for pocket optimization tasks?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of FAIR is well discussed in Appendix D.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your appreciation and suggestions! Following your suggestions, we have added new experiments, clarifications of formulations, and analyses. This revision has considerably improved our initial submission thanks to your constructive comments. We would love to know what you think of our response and whether there is anything else we can do to improve the paper. We would greatly appreciate your considering increasing the score. Many thanks!
**Comment 1:** In the experiments, the authors redesign all the residues that contain atoms within 3.5 Å of any binding ligand atoms. Can FAIR design larger regions of the protein pocket containing more residues?
**Response 1:** FAIR can design larger regions of the protein pocket with more residues. In our experiments, under the default setting we redesign all residues that contain atoms within 3.5 Å of any binding ligand atom, considering the typical distance ranges of protein-ligand interactions [r1]. Each protein pocket then contains an average of 8 residues. Here, we perform further experiments designing all residues that contain atoms within 5.0 Å of any binding ligand atom, leading to an average of around 22 residues per pocket. Designing a pocket with more residues is more challenging.
| Model | CrossDocked AAR(↑) | CrossDocked RMSD(↓) | CrossDocked Vina(↓) | Binding MOAD AAR(↑) | Binding MOAD RMSD(↓) | Binding MOAD Vina(↓) |
|----------------|-------------------|--------------------|---------------------|---------------------|----------------------|---------------------|
| FAIR (3.5 Å) | 40.17±12.6% | 1.42±0.07 | -7.022±1.75 | 43.75±15.2% | 1.35±0.10 | -7.978±1.91 |
| FAIR (5.0 Å) | 35.68±11.7% | 1.63±0.10 | -7.045±1.71 | 39.86±14.0% | 1.52±0.09 | -7.889±1.84 |
We observe that FAIR is generally robust to the number of residues to design: the AAR, RMSD, and Vina scores at 5.0 Å are comparable with those at 3.5 Å. Therefore, FAIR can design larger regions of a protein pocket. We will add further analysis to the appendix of the final paper.
[r1] Gilles Marcou and Didier Rognan. Optimizing fragment and scaffold docking by use of molecular interaction fingerprints. Journal of chemical information and modeling, 47(1):195–207, 2007
**Comment 2:** The authors may explore the influence of initialization strategies on FAIR.
**Response 2:** As shown in Appendix A.4, we initialize the residue coordinates with linear interpolations and extrapolations based on the nearest residues with known structures in the protein. For comparison, we also initialize the residue coordinates with those of their corresponding nearest residues.
| Model | CrossDocked AAR(↑) | CrossDocked RMSD(↓) | CrossDocked Vina(↓) | Binding MOAD AAR(↑) | Binding MOAD RMSD(↓) | Binding MOAD Vina(↓) |
|----------------|-------------------|--------------------|---------------------|---------------------|----------------------|---------------------|
| FAIR (linear interpolation) | 40.17±12.6% | 1.42±0.07 | -7.022±1.75 | 43.75±15.2% | 1.35±0.10 | -7.978±1.91 |
| FAIR (nearest residue) | 34.26±12.3% | 1.83±0.09 | -6.850±1.84 | 36.86±14.0% | 1.79±0.12 | -7.743±1.70 |
We can observe that the structure initialization strategy indeed influences performance: FAIR with the linear-interpolation initialization outperforms the nearest-residue variant. We will add more discussion of initialization strategies to the final version.
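To make the interpolation strategy concrete, here is a minimal sketch of the idea: unknown residue coordinates are linearly interpolated between the nearest residues with known coordinates along the sequence (or extrapolated at the chain ends). This is a generic illustration under our own assumptions, not the authors' implementation; the function and variable names are hypothetical.

```python
# Hedged sketch of the initialization idea: unknown residue coordinates are
# linearly interpolated between the nearest known residues along the sequence,
# or extrapolated from the two nearest known residues at a chain end.
# Illustration only, not the authors' code.

def init_unknown_coords(coords, known):
    """coords: list of (x, y, z) tuples (entries at unknown indices unused);
    known: indices of residues whose coordinates are known."""
    known = sorted(known)
    out = list(coords)
    for i in range(len(coords)):
        if i in known:
            continue
        left = [k for k in known if k < i]
        right = [k for k in known if k > i]
        if left and right:            # interpolate between flanking residues
            a, b = left[-1], right[0]
        elif len(left) >= 2:          # extrapolate beyond the last known pair
            a, b = left[-2], left[-1]
        elif len(right) >= 2:         # extrapolate before the first known pair
            a, b = right[0], right[1]
        else:                         # only one known residue: copy it
            out[i] = coords[known[0]]
            continue
        t = (i - a) / (b - a)
        out[i] = tuple(pa + t * (pb - pa)
                       for pa, pb in zip(coords[a], coords[b]))
    return out
```

The nearest-residue baseline in the table above would instead simply copy the coordinates of the closest known residue.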
**Comment 3:** Can other protein design methods or structure-based drug design methods be adapted to the studied pocket design problem?
**Response 3:** As discussed in Sec. 2, lines 97-101, structure-based drug design can be regarded as the dual problem of the pocket design studied in our paper. However, such methods focus on generating 3D molecular graphs based on a fixed protein pocket structure and can hardly be adapted to our pocket sequence-structure co-design problem. We will organize the related work more clearly in the final version.
**Comment 4:** Why is there no hyperparameter weight to balance the two loss functions, Equations 9 & 10?
**Response 4:** In experiments, we observe that the two loss functions have roughly the same magnitude, and directly optimizing their sum works well. Balancing the two loss functions with an additional weight (treated as a hyperparameter during fine-tuning) may further improve FAIR's performance. We will discuss the summation of the loss functions further in the final version.
**Comment 5:** Can FAIR be leveraged for pocket optimization tasks?
**Response 5:** That is a great suggestion, thank you. Indeed, FAIR can be leveraged for pocket optimization tasks. FAIR is a general architecture and can be combined with popular optimization methods for pocket optimization. For example, we can finetune the generation process of FAIR with reinforcement learning algorithms e.g., PPO [r2] to optimize the properties of the designed pockets. As mentioned in Appendix.D, we will explore pocket optimization tasks in the future.
[r2] Schulman J, Wolski F, Dhariwal P, et al. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
---
Rebuttal Comment 1.1:
Title: Reply to the author
Comment: Thank the authors for your comprehensive and insightful rebuttal. After reading your responses, most of my concerns and confusions have been addressed. I would like to increase my score from 7 to 9. I kindly request that all the modifications, explanations, and discussions outlined in the rebuttal be fully integrated into the final version of the paper.
---
Reply to Comment 1.1.1:
Title: Reply to the reviewer
Comment: Thank you very much for raising the score. We are glad that our response addressed your concerns. In the final version, we will be sure to include the modifications, explanations, and discussions.
Thank you very much,
Authors | Summary: The paper introduces FAIR, a pipeline for co-designing protein pocket sequences and 3D structures. This is important for drug design applications, since most small-molecule drugs (ligands) bind their targets (proteins) inside pockets. Currently existing methods have disadvantages (inefficient generation, inability to generate side chains, etc.) that FAIR overcomes. FAIR demonstrates promising results, as shown through thorough and comprehensive experiments.
Strengths: Originality: The task of protein pocket design is not new, and several deep learning methods have already contributed to this field. However, the submission provides new insights and solves previously unsolved issues, showing better performance than previous methods. Related works are cited.
Quality: The work is complete and technically sound. The authors support their claims and provide a baseline of FAIR's performance. They compare FAIR with other methods and show its advantages. The paper provides a thorough analysis of the strengths and weaknesses of FAIR, including ablation studies and a description of how to further improve the approach.
Clarity: The text is written clearly. It contains all the necessary citations. The submission provides a comprehensive explanation and description of all methods and approaches including the technical details of experiments and equations.
Significance: It is very important for drug discovery to be able to design protein pockets, since most small-molecule drugs bind in pockets. FAIR makes an additional contribution by addressing side-chain and flexibility design.
Weaknesses: The paper lacks a speed evaluation or discussion for the method (FAIR).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Can you please provide the information about FAIR speed (how much time does FAIR need to finish one protein co-design)? Is it possible to use it high-throughput (apply for thousands or millions of proteins)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper provides sophisticated discussion of limitations and future works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your appreciation and suggestions! We are really grateful for your feedback and acknowledgment of FAIR’s novel contributions and experiments.
**Comment 1:** Can you please provide the information about FAIR speed (how much time does FAIR need to finish one protein co-design)? Is it possible to use it high-throughput (apply for thousands or millions of proteins)?
**Response 1:** Thanks for the suggestion! In Figure 4(C), we compare the average generation time for 100 pockets against the baseline methods. FAIR is much faster than traditional methods and needs less than 1 second on average to finish one protein pocket co-design.
As shown in our code https://anonymous.4open.science/r/FAIR-9691, the generation process of FAIR can be parallelized for high-throughput pocket generation. We will add more discussion in our final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and for addressing the questions. I'm satisfied with the response and have no further suggestions. I recommend accepting the paper.
---
Reply to Comment 1.1.1:
Title: Reply to the reviewer
Comment: Thanks for your appreciation and support! We are glad that our response addressed your questions.
Thank you very much,
Authors | Summary: The authors study the 3D protein-ligand interaction problem. They introduce a novel method, termed FAIR, for designing protein binding pockets conditioned on the ligand structure. Unlike existing methods, FAIR co-designs the sequence and structure of the pocket by iteratively modeling both backbone atoms and side-chain atoms. The method also refines the ligand coordinates, accounting for its flexibility.
Strengths: The method combines the ideas from many previous ML works dealing with sequence-structure co-design using graph representation. The novelty of this approach is in the iterative refinement that first ensures the stability of the backbone atoms before proceeding to the modeling of the side chain atoms. Thus, the method models the ligand flexibility and the effect of side chains on the residue types. The comparison study includes all the relevant baselines and other methods and the ablation study is extremely useful as it justifies the importance of each block in their architecture.
Weaknesses: Some clarifications in Sections 3.2.1 and 3.2.2 are needed (see questions). Also, the authors should be very careful when using the term "de novo". They need to show how their method can perform de novo design or explicitly state what metrics they use to show that their designed sequences are de novo.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Some minor concerns are listed below.
Atom-level decoder:
1. The way Section 3.2.1 is written, it seems that the encoder uses all atoms in the protein-ligand complex. The number of atoms in the complex can be very large, resulting in a very large KNN graph, which could cause memory restrictions. Why not focus only on the pocket residues?
2. How is the variable number of atoms and atom types present in the ligand (as well as in different side chains) across different training samples handled?
3. Using this notation, it's not clear how the same atoms in different residues have different embeddings. Is this handled in the first MLP layer?
Residue-level encoder:
It's not clear what coarsening procedure is used for different types of ligands. If a ligand is treated as a "special" residue, then it's not clear how a feature vector describing its biochemical properties is formed. How is the 1-hot encoding containing the ligand's identity, distinguishing it from other ligand molecules, formed? Is there a vocabulary of all the ligands in the training set? The authors should clarify this.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors claim that their method can do de novo design of the binding pocket. I wish this had been shown in the paper. Most examples illustrated in the paper deal only with the recovery of existing binding pockets.
One suggestion to illustrate de novo design is to graft a binding pocket into an existing protein scaffold at a given location (one that was not previously a ligand-binding location).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation and detailed suggestions! If you have any additional questions or other comments we can address, please let us know. We would be very grateful if you considered increasing the score.
**Comment 1:** The way section 3.2.1 is written, it seems that the encoder is using all atoms in the protein-ligand complex. The number of atoms in the protein-ligand complex can be very large resulting in a very large KNN graph. This could cause some memory restrictions. Why not focusing only on the pocket residues?
**Response 1:** Motivated by the intrinsic hierarchical structure of proteins, we leverage a hierarchical encoder based on 3D graph transformers to encode the hierarchical context of protein-ligand complexes for pocket sequence-structure co-design. The structure and interactions of protein-ligand atoms play important roles in pocket design [8, 48], so we cannot focus only on residues. To reduce memory requirements while keeping high overall performance, we consider only the protein pocket atoms instead of the whole protein. We will introduce our method more clearly in the final version.
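As a rough illustration of how a k-nearest-neighbor graph over pocket atoms can be built, the following is a brute-force sketch with hypothetical names; it is not the authors' torch_geometric implementation, where spatial indexing or batched GPU kernels would be used instead.

```python
import math

# Brute-force KNN graph over 3D atom coordinates: each atom is connected to
# its k nearest neighbors by Euclidean distance. Hedged sketch for
# illustration only; real systems use spatial indexing for efficiency.

def knn_edges(points, k):
    """points: list of (x, y, z); returns directed edges (i, neighbor)."""
    edges = []
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j)
                       for j, q in enumerate(points) if j != i)
        edges.extend((i, j) for _, j in dists[:k])
    return edges
```

Restricting `points` to pocket atoms rather than the whole protein keeps this graph, and hence memory use, small.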
**Comment 2:** How is the variable number of atoms and atom types present in the ligand (as well as in different side chains) in different training samples handled?
**Response 2:** As shown in our code https://anonymous.4open.science/r/FAIR-9691, the computations of the ligand and protein pocket based on torch_geometric are agnostic to the number of atoms and atom types. This is the same for different side chains. With our designed batch operations, protein pocket-ligand complexes with different sizes can be processed in parallel.
**Comment 3:** Using this notation, it's not clear how the same atoms in different residues have different embeddings. Is this handled in the first MLP layer?
**Response 3:** In the atom-level encoder of FAIR, the atomic attributes are mapped to node embeddings with MLPs. If two atoms in different residues have the same atom and residue types, they will have the same initial embedding. The atom embeddings will be further updated based on neighboring atoms and 3D structures with the atom-level encoder. We will describe our model design more clearly in the final version.
**Comment 4:** It’s not clear what coarsening procedure is used for different types of ligands. If it’s treated as a “special” residue, then it’s not clear how a feature vector describing its biochemical properties is formed. How’s the 1-hot encoding containing ligand’s identity and distinguishing it from other ligand molecules formed ? Is there a vocabulary of all the ligands in the training set ? The authors should clarify this.
**Response 4:** The residue-level encoder only keeps the alpha-carbon atoms of residues. To supplement binding-ligand information, a coarsened ligand node at the ligand's center of mass is also considered by the residue-level encoder. The embedding of the coarsened ligand node is initialized by sum-pooling the ligand atom embeddings. Therefore, we need neither a 1-hot encoding of the ligand's identity nor a vocabulary of all ligands. In lines 153-154, we stated that the coarsened ligand node is appended at the end of the residue sequence as a special residue. We treat the coarsened ligand similarly to other residues and construct a Kr-nearest-neighbor graph. We will make our description clearer in the final version.
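The coarsening step described above can be sketched generically: the ligand node's embedding is the sum of its atom embeddings across each dimension, and its position is the mass-weighted center of the atom coordinates. This is an illustration under our own assumptions, with hypothetical names, not the authors' code.

```python
# Hedged sketch: coarsen a ligand into a single node whose embedding is the
# sum-pooled atom embeddings and whose position is the center of mass.
# Names are hypothetical; illustration only, not the authors' implementation.

def coarsen_ligand(atom_embeddings, atom_coords, atom_masses):
    """atom_embeddings: list of embedding vectors; atom_coords: list of
    (x, y, z); atom_masses: list of atomic masses."""
    embedding = [sum(dim) for dim in zip(*atom_embeddings)]  # sum pooling
    total_mass = sum(atom_masses)
    center = tuple(
        sum(m * c[d] for m, c in zip(atom_masses, atom_coords)) / total_mass
        for d in range(3))
    return embedding, center
```

Because the embedding is pooled from atom embeddings, no ligand vocabulary or identity encoding is required, which matches the response above.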
**Comment 5:** The authors claim that their method can do de novo design of the binding pocket. I wish this was shown in the paper. Most examples that are illustrated in the paper are just dealing with the recovery of the existing binding pockets. One suggestion to illustrate de novo design is by grafting a binding pocket in some existing protein scaffold in a given location (that was not previously a ligand binding location).
**Response 5:** Thanks for the question and suggestion! In our paper, the protein pocket region is masked, and FAIR co-designs the pocket residue types and structures. By "de novo", we mean that our method does not rely on existing reference pockets or templates and can generate the pocket from scratch [r1]. We use the recovery rate for evaluation, as it is a widely used metric in the protein design field [r2-r4]. Figure 3 in the main text further shows cases of pocket design where the generated pocket has higher binding affinity than the reference, demonstrating FAIR's ability for de novo pocket design. We agree that grafting a binding pocket into an existing protein scaffold at a given location is a good task for illustrating de novo design ability.
However, the grafting task is beyond the scope of our work and requires additional techniques such as pocket detection. Given the limited time of the rebuttal period, we plan to explore the grafting task in the future. We will also clearly discuss this in the final paper to distinguish the method's current capabilities from future extensions.
[r1] Bennett N. R. et al., Improving de novo protein binder design with deep learning. Nature Communications, 2023.
[r2] Zhangyang Gao et al., PiFold: Toward effective and efficient protein inverse folding. ICLR, 2023.
[r3] Justas Dauparas et al., Robust deep learning-based protein sequence design using ProteinMPNN. Science, 2022.
[r4] John Ingraham et al., Generative models for graph-based protein design. NeurIPS, 2019.
Strengths: - Overall sensible pipeline and inductive biases for all-atom protein pocket redesign.
- The FAIR pipeline consists of a residue-level, rotation-invariant encoder followed by an all-atom-level encoder that encodes the protein hierarchically, after which new residues are placed and iteratively refined in a hierarchical, rotation-equivariant manner. During iterative refinement, the atom positions of the ligand (small molecule) are also updated to incorporate ligand flexibility.
- From a biochemical perspective, considering all-atom/atom level encoding and generation is probably very important for binding, as it is the sidechain orientations that determine ligand binding.
- These are all sensible inductive biases -- rotation symmetry, hierarchy, iterative redesign, ligand flexibility -- and have been described clearly.
- None of these are particularly novel components, because the technical problems have been solved in other work, but this is an effective application of existing ideas to a novel problem of all-atom pocket redesign.
- Well written paper.
- Clear description of each architectural components as well as experimental methodology.
- I really loved the paragraph starting at line 294 onwards. This paragraph (and the overall paper) does at good job at showing how each architectural component of FAIR improves over existing papers/frameworks and is adapted to the specific problem at hand.
- I do wish that the authors found a way to not make the reader jump to the appendix so many times.
- Comparison to baselines.
- Care has been taken to adapt existing papers/methods to the new pocket design task in order to provide baselines.
Weaknesses: - New task may not require major new technical innovations.
- While the binding pocket re-design task is new in the deep learning literature to the best of my knowledge, it seems that adapting existing architectural ideas and putting them together in a smart pipeline (see strengths) works well.
- From a machine learning perspective, I could not identify a new technical problem that previous works have not solved. (I may be missing something.)
- Unsure whether pocket re-design is relevant to enzyme design.
- I will caveat this weakness by saying I am not a domain expert at all.
- I could immediately see this pipeline be applied for general binding pocket re-design and biosensor applications. One perhaps starts out with a structure of a protein-ligand complex and wants to improve binding affinity. However, perhaps this setup is a bit of a stretch for enzyme design.
- For enzymes, we would ideally like to keep the binding site conserved instead of re-designing it, based on my understanding of the field. In theory, it is of course nice if you can design a new binding pocket to discover new kinds of chemistry. However, as far as I am aware, basic scientists do not currently understand enough about catalytic mechanisms to design a new one from scratch.
- Thus, designing a new pocket and **claiming that it catalyzes the same reaction via the same/different mechanism** seems very hard to tackle from a basic science perspective, let alone computationally. Reasons: lack of data and proper annotation of enzymatic mechanisms.
- Hard to evaluate binding pocket re-design.
- I felt it was overall challenging to design evaluation metrics and setups for this task, because it is unclear whether recovering the RMSD and/or AA identities of the original ligand binding site is the best outcome for these models.
- I have several questions about the Vina docking score evaluations.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - What is the Vina docking score of the training set?
- It would be useful to have this as a baseline to compare all the methods to.
- Figure 3 provided Vina score for the reference structures. I'd be keen to see the average Vina score across the train/validation/test sets as an entry in Table 1, too.
- Can you also show Vina docking scores for all the models without re-docking/relaxation?
- In the small molecule generation literature, it has been found that many generative models may generate very unrealistic ligand poses and that the Vina score without relaxation is poor.
- Paragraph on line 302 onwards - Why are some residues used more and others less by the model? Is this observation related to binding, e.g., the hydrophobicity properties of residues?
- If you are mentioning these observations, it may be worth explaining why...
- How do we know the binding pose of the ligand in advance in real-world design scenarios? Do we always need to start with an existing (crystal) structure of the protein-ligand complex?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed technical limitations but not potential negative societal impact.
Overall, I would encourage the authors to further discuss WHY binding pocket re-design is a meaningful real-world task in more detail than given at present, how it is relevant for designing bio-sensors and/or enzymes (probably I am wrong in my understanding re. enzyme design), and how we can evaluate these tasks in a meaningful manner in-silico. What are some limitations which we cannot address computationally? Perhaps this may not be very relevant if we are judging this paper only from the machine learning perspective, but better contextualizing this project beyond just ML could be useful to readers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments! We hope our detailed response and added experiments better highlight FAIR’s novel contributions. If any remaining questions/concerns make you hesitate to raise the score, we would be grateful if you let us know so we could further improve our work.
**Due to the limits of the rebuttal, we show additional responses to some comments in the global response.**
**Comment 1:**
While the binding pocket re-design task is new in the deep learning literature to the best of my knowledge, it seems that adapting existing architectural ideas and putting them together in a smart pipeline (see strengths) works well.
**Response 1:** Thanks for the question! In this paper, we study protein pocket design, which designs the pocket sequence and structure conditioned on the binding ligand molecule and the protein scaffold context. This new task brings a series of challenges, and it is non-trivial to propose an effective method. For example, most previous methods generate only the protein backbone atoms while neglecting the sidechain atoms, which play important roles in protein pocket-ligand interactions. We propose a novel two-step coarse-to-fine generation procedure that properly considers the sidechain atoms. Moreover, the binding ligand molecules are flexible, which is overlooked in previous works. FAIR learns to update the molecular coordinates along the refinement process to model the flexibility of the ligands.
Our contributions are summarized in lines 63-72, including the new task, a novel method, and competitive performance. We will state our contributions more clearly in the final version.
**Comment 2:** For enzymes, we really would ideally like to keep the binding site conserved instead of re-designing it, based on my understanding of the field. In theory, it is of course nice if you can design a new binding pocket to discover new kinds of chemistry. However, as far as I am aware, basic scientists do not currently understand enough about catalytic mechanisms to be able to design a new one from scratch.
Thus, designing a new pocket and claiming that it catalyzes the same reaction via the same/different mechanism seems very hard to tackle from a basic science perspective, let alone computationally. Reasons: lack of data and proper annotation of enzymatic mechanisms.
**Response 2:** We agree that the catalytic mechanism of enzymes is complex and is currently not fully understood in biology. The activity of enzymes may also depend on the overall flexibility and electrostatic environment of the protein, making pocket re-designing a challenging task.
However, there are still some successful cases achieving various modification requirements by re-designing the residues in the protein pocket region.
For example, Ulrike Scheib et al. conducted pocket transplantation studies based on the homologous polyamine-binding proteins PotF and PotD. Despite only 35% sequence identity between PotF and PotD, they demonstrated that by transplanting the pocket of PotD into the pocket region of PotF, it is possible to achieve targeted changes in small-molecule binding specificity [r1].
Moreover, Nicholas F. Polizzi et al. successfully designed six de novo proteins to bind the drug apixaban; two bound with submicromolar affinity using the proposed van der Mer structural units [r2].
In our work, we propose an end-to-end generative framework, FAIR, for protein pocket design. We agree that more high-quality data and annotations from domain experts may further improve the performance of FAIR. Depending on the requirements, FAIR can be flexibly adapted to downstream applications. For cases where we would ideally like to keep the catalytic binding site conserved, we can retain the residues directly related to catalysis and use FAIR to design the other residues related to binding.
We will include these discussions in our final version.
[r1] Scheib U, Shanmugaratnam S, Farías-Rico J A, et al. Change in protein-ligand specificity through binding pocket grafting. Journal of Structural Biology, 2014, 185(2): 186-192.
[r2] Polizzi N F, DeGrado W F. A defined structural unit enables de novo design of small-molecule-binding proteins. Science, 2020, 369(6508): 1227-1233.
**Comment 3:** I felt it was overall challenging to design evaluation metrics and setups for this task because it is unclear whether recovering the RMSD and/or AA identities of the original ligand binding site is the best outcome for these models.
**Response 3:** We agree that it is challenging to comprehensively evaluate the designed binding pocket. In our paper, we use Amino Acid Recovery (AAR), Root Mean Square Deviation (RMSD), and Vina score to evaluate the designed pockets following previous works on protein/antibody design [3,7,11] and structure-based drug design [37,40,49]. These three metrics are currently established and widely used in the field. In the future, we will explore more evaluation metrics. As mentioned in Appendix D, it is also a good idea to carry out wet-lab experiments to validate the effectiveness of the designed protein pockets in the future.
**Comment 4:** What is the Vina docking score of the training set?
**Response 4:** The Vina docking scores of the train/validation/test sets are -7.035±2.11 / -7.063±1.97 / -7.016±2.24 for CrossDocked and -8.216±2.09 / -8.267±1.97 / -8.225±2.02 for Binding MOAD. Therefore, the Vina score of the pockets designed by FAIR is comparable to that of the datasets.
**Comment 5:** Why are some residues used more and some lesser by the model? Is this observation related to binding, eg. hydrophobicity properties of residues?
**Response 5:** The observation may be related to the hydrophobicity properties of residues. However, other factors such as the train/test data distribution and the randomness of residue sampling may also influence the generated residues. We will conduct more systematic validations and discussions in the future.
---
Rebuttal Comment 1.1:
Title: Follow up Qs
Comment: Thank you for the detailed response. Based on the discussions so far, I'm still not fully convinced by the evaluation.
Re. Response 1: My point was regarding how hierarchical embedding of protein structure (https://arxiv.org/abs/2006.09275) as well as ligand flexibility/co-folding (https://arxiv.org/abs/2209.15171) have come up in other works.
Re. Response 2: I understand, but neither of these citations seems to be about enzymes, correct? Having skimmed through them, they seem to reinforce the point that methods such as FAIR may be useful for biosensing applications where the goal is to bind with high specificity to a molecule.
Re. Response 3 and 4: I understand that these metrics are widely used in the community, but I would like to push the authors to elaborate more about whether these metrics are useful for the pocket redesign task?
- For instance, if we care about RMSD, perhaps consider reporting RMSD w/o re-docking, too.
- If we care about Vina, perhaps elaborate more on how the test set Vina score of all the methods (whether with or without re-docking), including FAIR, is actually not improved over the test set Vina score of the original datasets themselves. This seems especially true without re-docking.
- Please do consider adding a line on the test set Vina score in Tables 1, 2, etc.
---
Reply to Comment 1.1.1:
Title: Further Response to Reviewer z6bW (1/2)
Comment: We thank the reviewer for the valuable questions! We have provided detailed responses to your comments. Please let us know whether we have addressed your concerns.
**Comment 1:** My point was regarding how hierarchical embedding of protein structure (https://arxiv.org/abs/2006.09275), as well as ligand flexibility/co-folding (https://arxiv.org/abs/2209.15171), have come up in other works.
**Response 1:** Thanks to the reviewer for mentioning these two seminal works on protein-protein/ligand docking. We will cite and discuss them in our final version. However, our FAIR model differs from the two papers in both ML tasks and neural architecture details.
* First, PAUL (https://arxiv.org/abs/2006.09275) and NeuralPLexer (https://arxiv.org/abs/2209.15171) focus on protein-protein/ligand docking, where the goal is to predict protein-protein/ligand binding structures given the individual protein structures (PAUL) or protein sequences and ligand molecular graphs (NeuralPLexer). In contrast, neither the protein pocket sequence nor its structure is a required input to our FAIR model. We aim to co-design the pocket sequence and structure, which is not feasible with the aforementioned previous methods.
* Secondly, FAIR and PAUL have similar hierarchical architecture designs. However, FAIR is based on a hierarchical graph transformer, with which geometric equivariance is easier to achieve than with the 3D CNNs used in PAUL. Moreover, we use different features for protein representation (more details in Appendix A.2).
* Thirdly, FAIR and NeuralPLexer use different strategies to model ligand flexibility. FAIR leverages iterative refinement and is more efficient than diffusion methods adopted by NeuralPLexer (see Figure 4(c)).
**Comment 2:** I understand, but neither of these citations is about enzymes, correct? Having skimmed through them, they reinforce the point that methods such as FAIR may be useful for biosensing applications where the goal is to bind with high specificity to a molecule.
**Response 2:** FAIR is applicable to enzyme design. Due to the word limit of the rebuttal, we provide additional related work on enzyme design with rational design and computational methods here [r1-r4]. Depending on the requirements, FAIR can be adapted for diverse downstream applications, including enzyme and biosensor designs. For example, we can retain the conserved residues directly related to catalysis and use FAIR to design other residues related to binding for higher design success rates.
We will include the related papers and discussions in our final version.
[r1] Privett H K, Kiss G, Lee T M, et al. Iterative approach to computational enzyme design. Proceedings of the National Academy of Sciences, 2012, 109(10): 3790-3795.
[r2] Xie W J, Asadi M, Warshel A. Enhancing computational enzyme design by a maximum entropy strategy. Proceedings of the National Academy of Sciences, 2022, 119(7): e2122355119.
[r3] Mirts E N, Petrik I D, Hosseinzadeh P, et al. A designed heme-[4Fe-4S] metalloenzyme catalyzes sulfite reduction like the native enzyme. Science, 2018, 361(6407): 1098-1101.
[r4] Broom A, Rakotoharisoa R V, Thompson M C, et al. Ensemble-based enzyme design can recapitulate the effects of laboratory directed evolution in silico. Nature Communications, 2020, 11(1): 4808. | Rebuttal 1:
Rebuttal: **Global response to all reviewers:**
We thank the reviewers for their appreciation and valuable comments! Generally, the reviewers find that our paper presents a novel approach for designing protein pockets that bind to ligand molecules. In the rebuttal, we have conducted additional experiments and added more discussions and clarifications. With the constructive suggestions from the reviewers, we believe our paper will be improved after the rebuttal period!
In the response, we use [1], [2] ... to refer to the reference papers in the original paper and use [r1], [r2] ... to indicate the additional references in the rebuttal.
**Additional response to Reviewer z6bW:**
**Comment 6:** Can you also show Vina docking scores for all the models without re-docking/relaxation?
**Response 6:** Here we show the directly computed Vina scores for all the models without redocking and relaxation (Vina w/o redocking). For a fair comparison, we exclude the baseline PocketOptimizer because it uses re-docking and force-field relaxation in its generation procedure. Generally, the Vina scores w/o redocking and relaxation are higher (i.e., lower binding affinity). Our method FAIR achieves the lowest Vina score across settings and datasets. This can be attributed to FAIR's strong capability for pocket sequence-structure co-design. Meanwhile, the coordinates of the ligand molecules are updated during the refinement process, so FAIR relies less on redocking and relaxation.
| Model | CrossDocked Vina w/o redocking (↓) | CrossDocked Vina (↓) | Binding MOAD Vina w/o redocking (↓) | Binding MOAD Vina (↓) |
|---|---|---|---|---|
| DEPACT | -5.820±2.16 | -6.670±2.13 | -6.486±2.23 | -7.526±2.05 |
| HSRN | -5.449±2.01 | -6.565±1.95 | -5.870±2.04 | -7.349±1.93 |
| Diffusion | -5.364±1.88 | -6.725±1.83 | -6.358±2.19 | -7.724±2.36 |
| MEAN | -5.672±1.79 | -6.891±1.86 | -6.320±1.88 | -7.651±1.97 |
| FAIR | **-6.365±1.67** | **-7.022±1.75** | **-7.253±1.72** | **-7.978±1.91** |
**Comment 7:**
How do we know the binding pose of the ligand in advance in real-world design scenarios? Do we always need to start with an existing (crystal) structure of the protein-ligand complex?
**Response 7:** FAIR does not require the binding pose of the ligand in advance for pocket design because it is able to refine the ligand coordinates along with the pocket generation process. In practice, we can initialize the ligand poses with chemical tools (e.g., RDKit) or take existing binding ligand structures as a reference if available [8,45]. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The Full-Atom Iterative Refinement framework (FAIR) is a novel approach for designing functional proteins that bind with specific ligand molecules. FAIR consists of two steps: full-atom generation and 3D structure co-design. It uses a coarse-to-fine pipeline, updating residue types and structures together in each round. FAIR outperforms baselines in efficiently designing high-quality pocket sequences and structures, with average improvements on AAR and RMSD exceeding 10%.
Strengths: 1. This paper investigates protein pocket design, which determines the pocket structure and sequence based on the context of the protein scaffold and the binding ligand molecule.
2. It provides an end-to-end generative framework called FAIR that uses iterative refinement to co-design the pocket sequence and structure. FAIR solves the drawbacks of earlier research and effectively considers sidechains, ligand flexibility, and consistency of sequence structure for effective prediction.
3. FAIR outperforms baseline techniques in terms of various pocket design parameters. The gains on AAR and RMSD are often over 10%. FAIR generates data more than ten times quicker than conventional techniques.
Weaknesses: Figure 2 of the paper shows that the proposed FAIR adopts a two-stage learning process, so it is not an end-to-end deep learning framework. Therefore, a necessary concern is the training time of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Since a Transformer with a considerable number of parameters is used, please try to compare the parameter counts of the proposed method and other deep learning-based methods in the experimental section.
2. As stated in lines 196 and 199, the authors use two feed-forward neural networks (g and g’) to encode internal and external interactions, respectively. Apart from the difference in the output channels, are they modeled with the same structure? If so, please explain why two identical models (refer to Eqs. (5) and (6)) are used to model different interactions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The method proposed in the paper utilizes a two-stage iterative refinement strategy. However, the internal logic of these two refinement stages is not well correlated. Alternatively, why can't components such as the masking mechanism of the second stage be directly injected into the first stage?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable questions! We have provided detailed responses to your comments. Please let us know whether we have addressed your concerns. We would be very grateful if you would consider increasing your score to support our work.
**Comment 1:** Figure 2 of the paper shows that the proposed FAIR adopts a two-stage learning process, so it is not an end-to-end deep learning framework. Therefore, a necessary concern is the training time of the proposed method.
**Response 1:** Figure 2 shows that there are two main modules in FAIR, i.e., the backbone refinement and the full-atom refinement modules. However, the two main modules in Figure 2 do not indicate a two-stage learning process. Instead, the training and inference of FAIR are performed in an end-to-end fashion, similar to the other deep learning-based baselines. In experiments, it takes around 20 hours to train a FAIR model on a single V100 GPU, which is comparable to MEAN, HSRN, and Diffusion. The generation-efficiency comparison in Figure 4(c) further shows the advantages of our method.
We will illustrate our method more clearly in the final version.
**Comment 2:** Since Transformer with considerable parameters is used, please try to compare the parameters of the proposed method and other deep learning-based methods in the experimental section.
**Response 2:**
Here we show the number of parameters of FAIR and the other deep learning-based baseline models. FAIR has comparable or fewer parameters than the baseline methods. Besides the transformer scheme, other factors such as model architecture, hidden dimension size, and the number of layers influence the parameter count. Overall, FAIR is a lightweight and efficient model.
| Model | HSRN | Diffusion | MEAN | FAIR |
|---|---|---|---|---|
| Parameters | 8.77M | 4.00M | 0.70M | 0.73M |
**Comment 3:** As stated in lines 196 and 199, the authors use two feed-forward neural networks (g and g’) to encode internal and external interactions, respectively. Apart from the difference in the output channels, are they modeled with the same structure? If it is the same, then try to explain why two identical models (refer to Eqs. (5) and (6)) are used to model different interactions?
**Response 3:**
As discussed in lines 192-194 and in previous works [32, 12, 10], the internal interactions within a protein and the external interactions between a protein and a ligand have different properties. For this reason, FAIR uses two separate modules for interaction prediction. Eq. (5) focuses on interactions within protein residues: the inputs are the residue embeddings and distance encodings of the pairwise alpha-carbon distances, and there are four output channels for the four backbone atoms. Eq. (6) focuses on the external interactions between the pocket and the ligand: the inputs are atom embeddings and the corresponding distance encodings, and there is only one output channel. Due to these differences in input, output, and the interactions being modeled, we cannot use a single network.
We will clarify and improve the description of the method in the final version.
**Comment 4:** The method proposed in the paper utilizes a two-stage iterative refinement strategy. However, the internal logic of these two refinement stages is not well correlated. Alternatively, why can't components such as the masking mechanism of the second stage be directly injected into the first stage?
**Response 4:** Thanks for the question! FAIR has two main stages that follow a coarse-to-fine pipeline: it first models only the backbone atoms of the pocket to generate a coarse-grained structure, and then fine-adjusts the full-atom residues to achieve sequence-structure consistency.
In the first stage, since the pocket residue types and the number of sidechain atoms are largely undetermined, we model only the backbone atoms and residue types.
The masking mechanism is not appropriate for the first stage because the predicted residue types are less stable and reliable in early iterations.
In the second stage, we sample and initialize the residue types and sidechain atoms based on the results of the first stage.
With the masking mechanism, the residue-type and structure updates gradually converge, and residue sequence-structure consistency is achieved.
Therefore, the two refinement stages are well correlated.
We will add more discussions of our algorithm in the final version.
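To make the two-stage logic concrete, here is a highly simplified, self-contained schematic of the coarse-to-fine loop described above. The network predictions are replaced by random stubs, so this illustrates only the control flow (no masking in stage 1, masked low-confidence updates in stage 2), not FAIR's actual models.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def coarse_to_fine_design(n_res, n_rounds=5, seed=0):
    """Toy schematic of the two-stage coarse-to-fine loop (control flow only;
    network predictions are random stubs, NOT FAIR's actual networks)."""
    rng = np.random.default_rng(seed)
    # Stage 1: backbone-only refinement. No masking here: early residue-type
    # predictions are unreliable, so every position is re-predicted each round.
    probs = np.full((n_res, 20), 1 / 20)        # uniform over 20 amino acids
    for _ in range(n_rounds):
        logits = rng.normal(size=(n_res, 20))   # stub for the backbone network
        probs = 0.5 * probs + 0.5 * softmax(logits)
    # Stage 2: initialize full-atom residues from stage-1 output, then in each
    # round re-predict only a masked low-confidence subset so residue types and
    # structure converge together (sequence-structure consistency).
    for _ in range(n_rounds):
        conf = probs.max(axis=1)
        mask = conf < np.median(conf)           # mask the least confident half
        stub = rng.normal(size=(int(mask.sum()), 20))  # stub for full-atom net
        probs[mask] = 0.9 * probs[mask] + 0.1 * softmax(stub)
    return probs.argmax(axis=1), probs
```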
---
Rebuttal 2:
Comment: I have read and appreciate the authors' reply. My questions are mainly resolved. Nice work.
---
Rebuttal Comment 2.1:
Title: Reply to the reviewer
Comment: Thanks for your appreciation and reply! We are glad that our rebuttal resolved your questions.
Thank you very much,
Authors | null | null | null | null | null | null |
A Robust Exact Algorithm for the Euclidean Bipartite Matching Problem | Accept (poster) | Summary: The paper presents a randomized algorithm for computing a minimum-cost matching of the bipartite graph induced by two point sets, A and B, in the Euclidean plane. The best known running time for the problem under consideration is n^2 polylog(n). When A and B are drawn independently and identically from a fixed probability distribution, the presented algorithm achieves a running time of n^{7/4} polylog(n + D), where D is the ratio of the distance between the farthest pair of points to that between the closest pair. The presented algorithm can thus potentially improve on the quasi-quadratic bound under the probability assumption, provided D is not too large. The algorithm generalizes to higher dimensions.
Strengths: The minimum-cost matching is an important combinatorial problem, even on bipartite geometric graphs.
Weaknesses:
* The improvement over existing bounds is only under certain probabilistic assumptions, and assumptions about the relative positions of the points in the plane. Moreover, the presented algorithm is randomized.
* I don’t believe that the results will appeal to the broad audience of NeurIPS. The paper does not motivate the problem well with respect to the scope of NeurIPS. The problem is very restricted.
* The techniques are not very novel; they are adaptations of the Hungarian method plus a clever use of data structures.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you please explain the relevance of your results to NeurIPS, beyond the importance of the matching problem as a generic optimization problem?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for reviewing our submission.
> Could you please explain the relevance of your results to NeurIPS, beyond the importance of the matching problem as a generic optimization problem?
Minimum-cost bipartite matching is extensively used in many applications in Machine Learning, Computer Vision, and Statistical Inference. We enumerate a few of these below.
**Machine learning applications:** Optimal bipartite matching and its cost is used (i) as a loss function in GANs [1, 2], (ii) within autoencoders [3, 4], (iii) in domain adaptation [5, 6], (iv) clustering [7], and (v) self-supervised learning [8].
**Computer vision applications:** Multi-object tracking [9], object-centric learning [10, 11], object detection [12, 13], image retrieval [14], instance segmentation [15, 16], and vector graphics [17].
**Statistical inference applications:** Two-sample test [18, 19], mutual-independence test [20], and distributional shifts [21].
Computing exact matchings is too expensive in many of these applications, which has motivated machine learning researchers to design approximation algorithms [22--28]. Our contribution is an asymptotically faster exact algorithm for stochastic inputs such as those that arise in some of the applications above.
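To make the problem concrete, a minimal sketch of the exact baseline being accelerated (our illustration, not the paper's algorithm): computing the empirical p-Wasserstein distance by solving the assignment problem on the full Euclidean cost matrix with SciPy's Hungarian-style solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_wasserstein_p(A, B, p=2):
    """Empirical p-Wasserstein distance between equal-size point sets A, B.

    Builds the full n x n cost matrix ||a - b||^p and solves the assignment
    problem exactly; this quadratic-space, super-quadratic-time baseline is
    what sub-quadratic geometric algorithms aim to beat."""
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(cost)       # optimal perfect matching
    return cost[rows, cols].mean() ** (1.0 / p)    # normalized matching cost

# Example: two i.i.d. samples from the unit square.
rng = np.random.default_rng(0)
A = rng.random((50, 2))
B = rng.random((50, 2))
print(exact_wasserstein_p(A, B))
```

For large n the cost matrix alone is prohibitive, which is one way to see why geometric algorithms that avoid materializing all n^2 edges matter.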
---
**References.**
[1] H. Liu, G. U. Xianfeng, and D. Samaras. "A two-step computation of the exact gan wasserstein distance." In ICML, 2018.
[2] J. Cao, L. Mo, Y. Zhang, K. Jia, C. Shen, and M. Tan. "Multi-marginal wasserstein gan." NeurIPS, 2019.
[3] A. Kosiorek, S. Sabour, Y. W. Teh, and G. E. Hinton. "Stacked capsule autoencoders." NeurIPS, 2019.
[4] S. Kolouri, P. E. Pope, C. E. Martin, and G. K. Rohde. "Sliced Wasserstein auto-encoders." In ICLR, 2018.
[5] Y. Balaji, R. Chellappa, and S. Feizi. "Robust optimal transport with applications in generative modeling and domain adaptation." NeurIPS, 2020.
[6] C. Lee, T. Batra, M. Haris Baig, and D. Ulbricht. "Sliced wasserstein discrepancy for unsupervised domain adaptation." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019.
[7] X. Yang, C. Deng, K. Wei, J. Yan, and W. Liu. "Adversarial learning for robust deep clustering." NeurIPS, 2020.
[8] X. Wen, B. Zhao, A. Zheng, X. Zhang, and X. Qi. "Self-supervised visual representation learning with semantic grouping." NeurIPS, 2022.
[9] Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang. "Bytetrack: Multi-object tracking by associating every detection box." In ECCV, 2022.
[10] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf. "Object-centric learning with slot attention." NeurIPS, 2020.
[11] J. Brady, R. S. Zimmermann, Y. Sharma, B. Schölkopf, J. von Kügelgen, and W. Brendel. "Provably Learning Object-Centric Representations." arXiv preprint, 2023.
[12] Y. Wang, and J. M. Solomon. "Object dgcnn: 3d object detection using dynamic graphs." NeurIPS, 2021.
[13] Y. Li, Y. Chen, X. Qi, Z. Li, J. Sun, and J. Jia. "Unifying voxel-based representation with transformer for 3d object detection." NeurIPS, 2022.
[14] Y. Rubner, C. Tomasi, and L. J. Guibas. "The earth mover's distance as a metric for image retrieval." International journal of computer vision, 2000.
[15] B. Dong, F. Zeng, T. Wang, X. Zhang, and Y. Wei. "Solq: Segmenting objects by learning queries." NeurIPS, 2021.
[16] S. Hwang, M. Heo, S. W. Oh, and S. J. Kim. "Video instance segmentation using inter-frame communication transformers." NeurIPS, 2021.
[17] A. Carlier, M. Danelljan, A. Alahi, and R. Timofte. "Deepsvg: A hierarchical generative network for vector graphics animation." NeurIPS, 2020.
[18] M. Imaizumi, H. Ota, and T. Hamaguchi. "Hypothesis Test and Confidence Analysis With Wasserstein Distance on General Dimension." Neural Computation, 2022.
[19] N. Deb, B. B. Bhattacharya, and B. Sen. "Efficiency lower bounds for distribution-free Hotelling-type two-sample tests based on optimal transport." arXiv preprint, 2021.
[20] N. Deb, and B. Sen. "Multivariate rank-based distribution-free nonparametric testing using measure transportation." Journal of the American Statistical Association, 2023.
[21] S. Rabanser, S. Günnemann, and Z. Lipton. "Failing loudly: An empirical study of methods for detecting dataset shift." NeurIPS, 2019.
[22] M. Cuturi. "Sinkhorn distances: Lightspeed computation of optimal transport." NeurIPS, 2013.
[23] J. Altschuler, J. Niles-Weed, and P. Rigollet. "Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration." NeurIPS, 2017.
[24] P. Dvurechensky, A. Gasnikov, and A. Kroshnin. "Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn’s algorithm." In ICML, 2018.
[25] G. Luise, A. Rudi, M. Pontil, and C. Ciliberto. "Differential properties of sinkhorn approximation for learning with wasserstein distance." NeurIPS, 2018.
[26] N. Lahn, D. Mulchandani, and S. Raghvendra. "A graph theoretic additive approximation of optimal transport." NeurIPS, 2019.
[27] J. Altschuler, F. Bach, A. Rudi, and J. Niles-Weed. "Massively scalable Sinkhorn distances via the Nyström method." NeurIPS, 2019.
[28] P. K. Agarwal, S. Raghvendra, P. Shirzadian, and R. Sowle. "A Higher Precision Algorithm for Computing the $1 $-Wasserstein Distance." In ICLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and the references. I updated my rating to "Borderline Accept". | Summary: This paper studies the Euclidean bipartite matching problem. In the problem, there is a complete bipartite graph on parts A and B, and the cost of edge (alb) is ||a-b||^p. The goal is to compute the minimum cost perfect matching as quickly as possible. This setting is most motivated in the paper by computing the empirical p-Wasserstein distance (though I think the problem is also interesting in its own right). It is important to note that these edge costs are non-integral, in particular they can even be irrational. Many of the sub-quadratic algorithms that work when the edge costs are integral do not apply here.
It was known that the minimum cost matching in this setting can be computed with the Hungarian algorithm in time O(n^3), and in time O(n^2) in geometric settings. The main contribution of this work is showing that in geometric setting (i.e., the vertices of A,B are points in R^d), there exists an algorithm that runs in weakly polynomial sub-quadratic time. The “weakly polynomial” comes from a term in the runtime of log(Delta), where Delta gives the spread (i.e., ratio of the distance between the farthest and closest pair of points in A and B) of the point set, specifically the runtime is \tilde{O}(n^{2-1/(2d)}*phi(n)*log(Delta)), where the phi(n) is some runtime dealing with a nearest neighbor data structure (polylog(n) in d=2). Note that when A and B are sampled from some distribution mu in the unit square, even when mu is unknown, the runtime is \tilde{O}(n^{7/4}*log(Delta)).
The main strategy is to deploy a geometric Hungarian algorithm within a divide-and-conquer method based on a quadtree. However, the divide portion is only helpful when points in the quadtree are not close to the boundaries of their children's cells (in some sense that I'm stating very informally). Therefore the worst case is still just that of the Hungarian algorithm, but we get an improvement when these bad boundary cases do not happen.
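For readers unfamiliar with quadtrees, a minimal sketch of the subdivision step (our generic illustration, not the paper's actual data structure, which additionally maintains matching information per cell; assumes distinct points so recursion terminates):

```python
def build_quadtree(points, cell, max_pts=1):
    """Recursively subdivide a square cell (x, y, size) until each leaf
    holds at most max_pts of the given 2D points. Illustrative only."""
    x, y, s = cell
    inside = [p for p in points if x <= p[0] < x + s and y <= p[1] < y + s]
    if len(inside) <= max_pts:
        return {"cell": cell, "points": inside, "children": []}
    half = s / 2
    # Split into the four quadrant children and recurse.
    children = [build_quadtree(inside, (x + dx, y + dy, half), max_pts)
                for dx in (0, half) for dy in (0, half)]
    return {"cell": cell, "points": inside, "children": children}

tree = build_quadtree([(0.1, 0.1), (0.9, 0.9), (0.12, 0.11)], (0.0, 0.0, 1.0))
```

The divide step of the matching algorithm operates on such cells; intuitively, points far from their cell boundaries can be matched within the cell, while boundary points are what force the algorithm back toward the Hungarian worst case.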
Strengths: - Improved runtime on an interesting problem.
- Interesting technical insights in using quadtrees with a geometric Hungarian algorithm. Even after this main idea, there are a couple hurdles the authors need to overcome.
- Techniques are relatively practical, and this is exemplified with experiments on both synthetic and real data. This is particularly interesting because a lot of the past work apparently uses complicated, hard to implement data structures.
- I thought the paper was extremely well-presented. The technical difficulties of the paper are really well laid out.
Weaknesses: - It was not totally clear what techniques from the authors are totally new and what is building off of prior work. I would recommend adding a brief discussion of the work by Sharathkumar, at least. My understanding is that the big technical idea of placing the Hungarian algorithm within a quadtree-based divide and conquer, and then modifying it so the divide and conquer parts actually give progress, is a new perspective. Can you comment more on this?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Other comments to authors:
- (see question in weaknesses)
- Great figures throughout
- Nice discussion of high level technical overview at the end of Section 1
- Assuming you’re given another page in your camera ready, I’d add the conclusion section from your Appendix to the main body. The future directions/ open problems you give are nice.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and feedback.
> It was not totally clear what techniques from the authors are totally new and what is building off of prior work. I would recommend adding a brief discussion of the work by Sharathkumar, at least. My understanding is that the big technical idea of placing the Hungarian algorithm within a quadtree-based divide and conquer, and then modifying it so the divide and conquer parts actually give progress, is a new perspective. Can you comment more on this?
**Response.** Your understanding of the main technical idea is correct. The novelty of our algorithm is in placing the classical Hungarian algorithm within a geometric divide-and-conquer framework.
Our algorithm does not rely on any prior work except for a weighted nearest-neighbor based implementation of the Hungarian search step (which was proposed in [1, 2] and used in [3, 4]). Our algorithm also uses a randomly shifted quadtree, which is a popular data structure for the design of geometric approximation algorithms. To the best of our knowledge, our algorithm is the first to use them in the design of an exact Euclidean bipartite matching algorithm.
**Comparison with Sharathkumar [4].** We have included a brief comparison of our result with the result of Sharathkumar [4] in lines 61--66 of our initial submission. We will extend it to also include a comparison of techniques.
From a technical standpoint, apart from using a combinatorial primal-dual approach, the algorithm by Sharathkumar [4] and our algorithm are quite different.
The algorithm in [4] uses the cost scaling framework of Gabow and Tarjan [5, 6] to find an approximate solution. Using the properties of this approximate solution, they 'trap' the edges of the optimal matching in a *planar* graph, i.e., a graph that can be drawn on a plane without any overlapping edges. They then use an algorithm by Lipton and Tarjan [7] to find an optimal matching inside this planar graph.
The proof of correctness of [4] relies on (a) the $2$-dimensional geometry of the input, (b) the edge-costs being square-roots of integers (owing to integer coordinates) and (c) the triangle inequality of Euclidean distances. Thus, their algorithm does not extend to (i) higher dimensions (due to (a) not being satisfied), (ii) to points with real-valued coordinates (due to (b) not being satisfied) or (iii) to the case where edge costs are squared-Euclidean (due to (c) not being satisfied). In contrast, our algorithm extends to all these cases.
---
---
**References.**
[1] P. Vaidya. "Geometry helps in matching." In Proceedings of the twentieth annual ACM symposium on theory of computing, 1988.
[2] R. Sharathkumar, and P. K. Agarwal. "Algorithms for the transportation problem in geometric settings." In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, 2012.
[3] P. K. Agarwal, A. Efrat, and M. Sharir. "Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications." In Proceedings of the eleventh annual symposium on Computational geometry, 1995.
[4] R. Sharathkumar. "A sub-quadratic algorithm for bipartite matching of planar points with bounded integer coordinates." In Proceedings of the twenty-ninth annual symposium on Computational geometry, 2013.
[5] Gabow, Harold N., and Robert E. Tarjan. "Faster scaling algorithms for network problems." SIAM Journal on Computing 18, no. 5 (1989): 1013-1036.
[6] Gabow, Harold N., and Robert E. Tarjan. "Faster scaling algorithms for general graph matching problems." Journal of the ACM (JACM) 38, no. 4 (1991): 815-853.
[7] Lipton, Richard J., and Robert Endre Tarjan. "Applications of a planar separator theorem." In 18th Annual Symposium on Foundations of Computer Science (sfcs 1977), pp. 162-170. IEEE, 1977.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for your response! I believe in my initial reading of the comparison to the work of Sharathkumar, I didn't realize there were two different sets of papers (Sharathkumar; Sharathkumar and Agarwal) which confused me in understanding how your work built on these past works. I see now this was a misreading on my part.
I don't believe I will be updating my score (I still believe this paper should be accepted, and that seems in line with basically all of the other reviewers' assessments), but I will continue to monitor the discussion and update my score if need be! | Summary: This paper proposes a new, exact algorithm for solving the Euclidean weighted bipartite matching problem. Here, we have data sets $A,B \subset \mathbb{R}^d,$ each of cardinality $n,$ and the weight of an edge $ab$ is defined to be $\lVert a - b \rVert^p$ for any integer $p\ge 1.$ This formulation is motivated by its direct application to computing empirical $p$-Wasserstein distances.
For data drawn i.i.d. from the same unknown distribution on the unit hypercube, the expected runtime of the algorithm is shown to be weakly-polynomial time, asymptotically in the number of points $n$ and with respect to an additional \emph{spread} parameter $\Delta $, defined to be the ratio of largest and smallest distances between any two points. As $d \to \infty $, this expected runtime approaches that of the classical Hungarian algorithm, up to polylog factors and assuming an efficient data structure for weighted nearest neighbors.
Most of the analysis and all of the experiments focus on the special case where $d=2$, where the improvements over the Hungarian algorithm are more pronounced. The runtime analysis rests on the key observation of Lemma 2.1: if we cut a randomly-shifted square containing the data into four equal pieces, an asymptotically small number of data points will lie near the boundary where these pieces meet. A minimum-cost matching may be constructed by repeated subdivision: optimal matchings in each of the four pieces can be combined to get a feasible solution, and the algorithm searches for "admissible" augmenting paths until the feasible solution can no longer be improved. For a bipartite graph on $2n$ vertices drawn i.i.d. uniformly from the unit square, the expected runtime of this algorithm is $\tilde O (n^{7/4} \log \Delta )$. For comparison, the Hungarian algorithm in this setting is $\tilde O (n^2)$. Experiments on real and synthetic data show that the new algorithm can substantially outperform the Hungarian algorithm.
Post-rebuttal edit: The authors have responded in a satisfactory manner to my queries. Their algorithm is an interesting addition to the arsenal of matching algorithms, but nevertheless has certain limitations such as dependence on the parameter $\Delta $. For this reason I maintain my high rating of accept.
Strengths: Matching problems are foundational to computer science and optimization. This paper proposes a solution that demonstrates both significant theoretical and empirical advantages in the particular domain of empirical $p$-Wasserstein distance, where these problems have been successfully applied. Among prior work proposing improvements to the Hungarian algorithm, the limitations the authors overcome include a focus on the unweighted case, distributional assumptions, approximate rather than exact algorithms, and special cases of the geometric setting studied here.
The runtime analysis is presented very clearly, and theorems are stated carefully. Although the algorithm is ultimately only weakly polynomial, the authors' theoretical assertions that the algorithm outperforms the Hungarian method for a range of parameters are backed up convincingly by empirical results.
Another noteworthy feature of the algorithm is that it does not rely on sophisticated data structures like prior work. Indeed, hardly any background at all is needed to understand the proofs and implement the algorithm. This is also reflected in the simplicity of the source code which the authors have shared to aid in reproducibility.
Weaknesses: My main criticism, with a view towards the proposed application of $p$-Wasserstein distance computation, is that the case of _unequal_ distributions is treated mostly as an afterthought. Arguably, this is a more important case in practice since it involves less restrictive assumptions on the data. One finds a few comments addressing this scattered throughout the paper, e.g., in the abstract and Remark 3.2. However, to justify the claim that the algorithm performs similarly to the Hungarian algorithm in such cases, it would be beneficial to give experimental results in addition to the asymptotic analysis (as the authors have already done in the equal-distributions case).
I suggest generally that the authors be somewhat more explicit about the limitations of their algorithm. As already noted in the introduction, computing the empirical Wasserstein distance is mainly tractable in low dimensions. Nevertheless, other readers might be interested in the high-dimensional Euclidean matching problem. I find it difficult to believe that the performance of this method would be better than the purely-combinatorial Hungarian for $d$ large, particularly because of the high cost of dividing each sub-hypercube into $2^d$ pieces.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: line 46: " The execution time of our algorithm is similar to that..." I assume Remark 3.2 is what justifies this remark? It would be helpful to include a forward reference here.
line 59: "and their fast implementations" Don't you mean "its fast implementations"?
line 322: "The dataset consists of the locations" This means latitude and longitude, correct?
line 323: "We filtered the datasets by considering trips" Can you comment more on the choices behind your data filtering and other aspects of experimental design? Specifically, could you report some summary statistics (min, max, median, mean) for the spread of these data sets, both with and without the filtering? It is understandable that some filtering would be needed to see a performance increase over the Hungarian algorithm, and I think the thresholds are reasonable. Still, it would be helpful to understand the effect of filtering on both the runtime and the parameters used to analyze runtime. Moreover, given the filtering step, I think that the suggested improvement $\tilde O( n^{.55})$ is very speculative.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: For scientific limitations, refer to my comments under "Weaknesses". I do not see any potential for these results to have direct adverse effects on society.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful review and constructive feedback. We address your concerns below.
> My main criticism, with a view towards the proposed application of $p$-Wasserstein distance computation, is that the case of unequal distributions is treated mostly as an afterthought. Arguably, this is a more important case in practice since it involves less restrictive assumptions on the data. To justify the claim that the algorithm performs similarly to the Hungarian algorithm in such cases, it would be beneficial to give experimental results in addition to the asymptotic (as the authors have already done in the equal-distributions case.)
**Response.** Thank you for this question. The case of *equal* distribution also has significant applications in ML. We have highlighted some of these applications in our common response.
We have also conducted experiments for the unequal case.
In our experiment on the NY-Taxi data set (already included in our submission), we sampled $n$ drop-off locations and $n$ pick-up locations and matched them. Based on our observation, the pick-up and drop-off locations tend to follow different distributions, where pick-ups seem to have a higher density around Manhattan than the drop-offs.
Furthermore, we conducted additional experiments, where one set is drawn from a Gaussian distribution and the other set is chosen uniformly at random from a unit square. We also included the results of a similar experiment, where one set of points is drawn from the uniform distribution over the unit square and the other set is sampled from a Gaussian mixture model consisting of $5$ clusters in the $2$-dimensional space. See Figure 1 in the pdf document attached to our general response. We notice an improvement in the efficiency even in these cases.
> I suggest generally that the authors be somewhat more explicit about the limitations of their algorithm. As already noted in the introduction, computing the empirical Wasserstein distance is mainly tractable in low dimensions. Nevertheless, other readers might be interested in the high-dimensional Euclidean matching problem. I find it difficult to believe that the performance of this method would be better than the purely-combinatorial Hungarian for large $d$, particularly because of the high cost of dividing each sub-hypercube into $2^d$ pieces.
**Response.** The divide step of our algorithm takes only $O(dn)$ time. Indeed, a cell has $2^d$ children, which may be much higher than $n$. However, we note that the only sub-problems that we care about are the ones created by the non-empty children, which are at most $n$, i.e., $O(n)$. One can create these sub-problems by simply scanning the points and placing them in the appropriate sub-problem in $O(dn)$ time.
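The divide step described in this response can be sketched in a few lines; the representation below (points as tuples, children keyed by a bit-vector of coordinate comparisons) is our own illustration rather than code from the paper:

```python
from collections import defaultdict

def divide(points, lo, hi):
    """Split the cell with opposite corners lo, hi into its non-empty children.

    One O(d) scan per point: a point's child is determined by comparing each
    coordinate against the cell's midpoint, so empty children among the 2^d
    candidates are never materialized. Total cost O(d*n).
    """
    d = len(lo)
    mid = [(lo[i] + hi[i]) / 2.0 for i in range(d)]
    children = defaultdict(list)
    for p in points:
        key = tuple(p[i] >= mid[i] for i in range(d))  # bit-vector over dims
        children[key].append(p)
    return dict(children)
```

For instance, three points in the unit square create only three of the four possible child sub-problems; the fourth never exists.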
> "We filtered the datasets by considering trips" Can you comment more on the choices behind your data filtering and other aspects of experimental design? Specifically, could you report some summary statistics (min, max, median, mean) for the spread of these data sets, both with and without the filtering? It is understandable that some filtering would be needed to see a performance increase over the Hungarian algorithm, and I think the thresholds are reasonable. Still, it would be helpful to understand the effect of filtering on both the runtime and the parameters used to analyze runtime. Moreover, given the filtering step, I think that the suggested improvement $\tilde{O}(n^{0.55})$ is very speculative.
**Response.** The objective of applying filters is to eliminate erroneous entries in the data, such as entries of trips with negative duration or implausible velocity. To show that the effect of the data-cleaning step on the result of the experiment was insignificant, we re-executed our algorithm on the NY Taxi dataset, this time applying only two basic filters: (1) the trip duration had to be at least $3$ minutes, and (2) the trip velocity could not exceed $112$ mph. You can see the results in Figure 2 of the PDF file uploaded as part of our general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments, which have clarified my concerns. I will monitor the discussions before reaching a final decision about the paper rating. | Summary: The paper considers matching two sets $A$ and $B$ of $n$ points in the Euclidean space so as to minimize the sum of distances of matched points, when both pointsets are drawn independently and identically from the same (unknown to the algorithm) distribution. The authors extend the well-known Hungarian method with the shifted quad-tree decomposition technique, and improve over the best known result for the worst-case pointsets. The paper is complemented with experimental results.
Strengths: This is an interesting problem to consider and I found the extension of the Hungarian method with the shifted-quad tree technique quite interesting and -- at least to me -- novel. The paper is well written.
Weaknesses: The result of the paper is novel but I am not sure if I would call a better performance for only a special class of inputs an improvement over (slightly worse) results but that hold for general inputs.
I would have preferred if the paper explicitly stated it every time a technique from the literature is used/adapted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I was convinced by your motivation for studying the problem, but I was wondering if you know of any real-world matching problem where it is natural to have the points drawn by the same distribution?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations applicable and no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful review and feedback.
> The result of the paper is novel but I am not sure if I would call a better performance for only a special class of inputs an improvement over (slightly worse) results but that hold for general inputs.
**Response.** We do not claim a better performance but a more *robust* performance in comparison to the Hungarian algorithm. Our algorithm has a similar worst-case performance to the Hungarian method but a faster performance for stochastic point sets (see our general response for a discussion on this).
> I was convinced by your motivation for studying the problem, but I was wondering if you know of any real-world matching problem where it is natural to have the points drawn by the same distribution?
**Response.** There are many ML problems that require testing if two sample sets are from the same distribution.
* Distributional shifts: Does the real data set represent the same distribution as the training data from which ML models are built? [1]
* Benchmarks: Do models built using different ML techniques on the same training data represent the same distribution? [2, 3]
* Mutual-independence: Given a multi-variate distribution, are the two marginals of the distribution independent? [4]
The questions above reduce to the *two-sample test problem*: Given two sets of $n$ samples, determine if they are drawn from the same multivariate distribution or different ones. In the *Wasserstein two sample test*, one checks if the optimal matching cost between the samples is below a threshold to determine if they are from the same distribution [5]. The computation of the optimal matching cost can be done using our algorithm.
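As a concrete illustration of this reduction (using an off-the-shelf $O(n^3)$ Hungarian solver rather than our algorithm), the test can be run end to end; the threshold `tau` is a hypothetical parameter that would in practice be calibrated, e.g., by a permutation procedure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_two_sample(A, B, tau, p=1):
    """Reject 'same distribution' when the optimal matching cost exceeds tau.

    A, B: (n, d) arrays of samples. Builds the full n x n matrix of p-th
    powers of Euclidean distances, solves the min-cost perfect matching
    (Hungarian algorithm via SciPy), and compares the resulting empirical
    p-Wasserstein distance against the threshold tau.
    """
    C = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2) ** p
    rows, cols = linear_sum_assignment(C)
    cost = (C[rows, cols].sum() / len(A)) ** (1.0 / p)
    return cost, cost > tau
```

Two identical sample sets yield cost $0$ (no rejection), while well-separated sets yield a large cost and a rejection.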
---
---
**References.**
[1] S. Rabanser, S. Günnemann, and Z. Lipton. "Failing loudly: An empirical study of methods for detecting dataset shift." Advances in Neural Information Processing Systems, 2019.
[2] A. Borji. "Pros and cons of gan evaluation measures." Computer vision and image understanding, 2019.
[3] D. Lopez-Paz, and M. Oquab. "Revisiting classifier two-sample tests." arXiv preprint, 2016.
[4] N. Deb, and B. Sen. "Multivariate rank-based distribution-free nonparametric testing using measure transportation." Journal of the American Statistical Association, 2023.
[5] M. Imaizumi, H. Ota, and T. Hamaguchi. "Hypothesis Test and Confidence Analysis With Wasserstein Distance on General Dimension." Neural Computation, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the response and the clarifications. My evaluation remains unchanged. | Rebuttal 1:
Rebuttal: Thank you for the very positive feedback on our work. We want to emphasize a few important points that were also presented in our manuscript and hope that these points may help address some of the reviewers' criticisms.
**Novelty.** The novelty of our algorithm is in placing the classical Hungarian algorithm within a geometric divide-and-conquer framework. The Hungarian algorithm conducts a "global Hungarian search" to match each point. In contrast, our algorithm obtains speed-up by trapping shorter edges of the optimal matching in smaller sub-problems (squares) of the quadtree and matching these points using a Hungarian search that is local to the sub-problem. While our asymptotic improvements are shown for stochastic points that are drawn from the same distribution, we do expect our algorithm to perform better when the optimal matching has many edges with small cost, for instance, when points are drawn from two similar but unequal distributions. This is best exemplified by our experiment on the NY-Taxi data set (already included in our submission), where we sample $n$ drop-off locations and $n$ pick-up locations and match them. Based on our observation, the pick-up and drop-off locations tend to follow different distributions, where pick-ups seem to have a higher density around Manhattan than the drop-offs. We also conducted two additional experiments on samples drawn from two different distributions (the first using Gaussian and uniform distributions, and the second using a Gaussian mixture model with $5$ clusters and a uniform distribution) and include the results in the one-page pdf submitted as part of our response.
In terms of techniques, our algorithm does not rely on any prior work except for a weighted nearest-neighbor based implementation of the Hungarian search step (which was proposed in [1, 2] and used in [3, 4]). Our algorithm also uses a randomly shifted quadtree, which is a popular data structure for the design of geometric approximation algorithms. To the best of our knowledge, our algorithm is the first to use them in the design of an exact Euclidean bipartite matching algorithm.
**Applications.** There are many ML problems that require testing if two sample sets are from the same distribution.
* Distributional shifts: Does the real data set represent the same distribution as the training data from which ML models are built? [5]
* Benchmarks: Do models built using different ML techniques on the same training data represent the same distribution? [6, 7]
* Mutual-independence: Given a multi-variate distribution, are the two marginals of the distribution independent? [8]
The questions above reduce to the *two-sample test problem*: Given two sets of $n$ samples, determine if they are drawn from the same multivariate distribution or different ones. In the *Wasserstein two sample test*, one checks if the optimal matching cost between the samples is below a threshold to determine if they are from the same distribution [9]. The computation of the optimal matching cost can be done using our algorithm.
Two-sample testing dates back to Leighton and Shor [10], who used it to evaluate the quality of pseudo-random number generators.
Several ML problems generate samples from similar (but not identical) distributions, such as the domain adaptation problem [11] and the training of a Wasserstein GAN [12]. In both instances, our algorithm can be faster than Hungarian when the generated samples are low-dimensional.
---
---
**References.**
[1] P. Vaidya. "Geometry helps in matching." In Proceedings of the twentieth annual ACM symposium on theory of computing, 1988.
[2] R. Sharathkumar, and P. K. Agarwal. "Algorithms for the transportation problem in geometric settings." In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, 2012.
[3] P. K. Agarwal, A. Efrat, and M. Sharir. "Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications." In Proceedings of the eleventh annual symposium on Computational geometry, 1995.
[4] R. Sharathkumar. "A sub-quadratic algorithm for bipartite matching of planar points with bounded integer coordinates." In Proceedings of the twenty-ninth annual symposium on Computational geometry, 2013.
[5] S. Rabanser, S. Günnemann, and Z. Lipton. "Failing loudly: An empirical study of methods for detecting dataset shift." Advances in Neural Information Processing Systems, 2019.
[6] A. Borji. "Pros and cons of gan evaluation measures." Computer vision and image understanding, 2019.
[7] D. Lopez-Paz, and M. Oquab. "Revisiting classifier two-sample tests." arXiv preprint, 2016.
[8] N. Deb, and B. Sen. "Multivariate rank-based distribution-free nonparametric testing using measure transportation." Journal of the American Statistical Association, 2023.
[9] M. Imaizumi, H. Ota, and T. Hamaguchi. "Hypothesis Test and Confidence Analysis With Wasserstein Distance on General Dimension." Neural Computation, 2022.
[10] F. T. Leighton, and P. Shor. "Tight bounds for minimax grid matching, with applications to the average case analysis of algorithms." In Proceedings of the eighteenth Annual ACM symposium on theory of computing, 1986.
[11] Y. Balaji, R. Chellappa, and S. Feizi. "Robust optimal transport with applications in generative modeling and domain adaptation." NeurIPS, 2020.
[12] H. Liu, G. U. Xianfeng, and D. Samaras. "A two-step computation of the exact gan wasserstein distance." In ICML, 2018.
Pdf: /pdf/ed1366011805617fcf43dbd1759208cebd67f412.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Projection-Free Methods for Solving Nonconvex-Concave Saddle Point Problems | Accept (poster) | Summary: This paper studies the constrained nonconvex-concave minimax problem. This problem has been studied in several papers in the literature, but this paper proposes a projection free (single loop) algorithm to solve this problem.
Strengths: The proposed algorithms are interesting and extend Frank-Wolfe type methods to the nonconvex setting.
Weaknesses: 1. The authors have made some effort to explain why the LMO (linear minimization oracle) might be much more computationally efficient compared to a projection onto a set. The motivating example seems to be that of the nuclear norm constraint. Can the authors describe in a little more detail why this is the case for this constraint? It would make the paper more complete and provide motivation to study projection-free methods.
2. Once again, the motivating example of the nuclear constraint does not seem to be addressed in the assumption of $\alpha$-strongly-convex sets. Are these constraints not strongly convex? If this is the case, then there seems to be a mismatch between the motivating examples and the assumptions.
3. What is the relation between the strong convexity of the set and the strong convexity of the objective function? For example, if the inner problem is strongly concave (like in the setting of Theorem 4.4), do we still need to assume the strong convexity of the constraint set?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1 The authors have made some effort to explain why the LMO might be much more computationally efficient compared to a projection onto a set. The motivating example seems to be that of the nuclear norm constraint. Can the authors describe in a little more detail why this is the case for this constraint? It would make the paper more complete and provide motivation to study projection-free methods.**
**A1**
That is an excellent question. Please note that solving a linear optimization problem over the nuclear norm ball requires computing only a single pair of singular vectors corresponding to the largest singular value, whereas computing a projection onto the nuclear norm ball demands a complete SVD. The computational cost of the latter operation is $\mathcal O(kd\min(k,d))$, while that of the former is $\mathcal O(\nu \ln(k+d)\sqrt{\sigma_1}/\sqrt{\epsilon})$, where $\nu\leq kd$ and $\sigma_1$ are the number of nonzero entries and the top singular value of $-\nabla_x\mathcal L(x,y)$, respectively, and $\epsilon$ is the accuracy (see [R1] for more details). Therefore, in this example, the LMO is considerably more cost-effective to compute than the projection. We will add this discussion to the revised manuscript.
[R1] Combettes CW, Pokutta S. Complexity of linear minimization and projection on some sets. Operations Research Letters. 2021 Jul 1;49(4):565-71.
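For intuition, here is a minimal numpy-only sketch of that linear minimization step, with power iteration standing in for an exact top-singular-pair routine (function and parameter names are ours, not from the paper):

```python
import numpy as np

def lmo_nuclear_ball(G, radius, iters=200, seed=0):
    """argmin_X <G, X> over the nuclear-norm ball {X : ||X||_* <= radius}.

    The minimizer is -radius * u1 v1^T for the top singular pair (u1, v1)
    of G. Each power-iteration step costs O(nnz(G)) work, in contrast to
    the full SVD a projection onto the same ball would require.
    """
    v = np.random.default_rng(seed).standard_normal(G.shape[1])
    for _ in range(iters):            # power iteration on G^T G
        v = G.T @ (G @ v)
        v /= np.linalg.norm(v)
    u = G @ v
    u /= np.linalg.norm(u)
    return -radius * np.outer(u, v)
```

The output is a rank-one matrix with nuclear norm exactly `radius`, achieving inner product $-\mathrm{radius}\cdot\sigma_1(G)$ with $G$.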
---
**Q2 Once again, the motivating example of the nuclear constraint does not seem to be addressed in the assumption of $\alpha$-strongly-convex sets. Are these constraints not strongly convex? If this is the case, then there seems to be a mismatch between the motivating examples and the assumptions.**
**A2**
Please note that we only require the strongly convex set assumption for the maximization problem, i.e., for the set $Y$. In both motivating examples, the nuclear norm constraint is used in the minimization part of the problem, where we do not require the strongly convex set assumption. More specifically, the Continual Dictionary Learning example in our paper effectively exhibits the application of strong convexity of the set $Y$, as it includes an $\ell_2$-norm ball constraint.
Additionally, for the Robust Multiclass Classification example, the constraint of the maximization is the intersection of the simplex and a divergence measure constraint. Indeed, one can relax the simplex constraint using the splitting technique and Fenchel duality. The resulting equivalent saddle point problem has the maximization constraint $Y=\{y:V(y,\frac{1}{n}\mathbf 1_n)\leq \rho\}$, which is described by the divergence measure constraint alone. In some popular examples such as the Pearson Chi-square divergence, i.e., $V(y,\mathbf{1}_n/n)=\|ny-\mathbf{1}_n\|^2$, $Y$ satisfies the strongly convex constraint set assumption. We will add a more detailed discussion in this regard to the revised manuscript.
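As an illustration (our own sketch, not from the paper), the Pearson Chi-square constraint set is the Euclidean ball with center $\mathbf 1_n/n$ and radius $\sqrt{\rho}/n$, a strongly convex set, so its maximization LMO has a closed form:

```python
import numpy as np

def lmo_chi2_ball(g, n, rho):
    """Maximization LMO over Y = {y : ||n*y - 1_n||^2 <= rho},
    i.e. the l2 ball centered at 1_n/n with radius sqrt(rho)/n
    (a strongly convex set).  argmax_{y in Y} <g, y> is the
    center pushed to the boundary along the direction of g."""
    center = np.full(n, 1.0 / n)
    return center + (np.sqrt(rho) / n) * g / np.linalg.norm(g)
```

The output always lies on the boundary of the ball, in the direction of the gradient.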
---
**Q3 What is the relation between the strong convexity of the set and the strong convexity of the objective function? For example, if the inner problem is strongly concave (like in the setting of Theorem 4.4), do we still need to assume the strong convexity of the constraint set?**
**A3**
The definitions of a 'strongly convex set' and a 'strongly convex objective function' are distinct. For Algorithm R-PDCG, even in the strongly concave setting, we still need the strongly convex set assumption for the maximization problem. This assumption plays a critical role in our convergence analysis, which is closely related to the analysis of FW-type methods.
To gain a deeper understanding of this assumption, it is helpful to examine the convergence results of FW-type methods for solving strongly convex minimization problems. Classical studies on FW-type methods have shown that, unlike projection-based methods, strong convexity of the objective function does not necessarily lead to an accelerated rate (faster than $\mathcal O(1/K)$) [R2]. Achieving a faster rate often requires imposing additional assumptions, such as the existence of a solution in the interior of the domain or a uniform lower bound on the norm of the gradient of the objective function at the solution.
It is important to note that extending most of these assumptions to min-max problems may not yield a reasonable assumption, since the solution set of the maximization problem $\mathcal Y^\star(x)=\hbox{argmax}_{y\in Y}\mathcal L(x,y)$, as well as the gradient at the maximizer $\nabla_y\mathcal L(x,y^\star(x))$, changes with respect to $x$. Consequently, our novel analysis using a relatively mild set of assumptions led to convergence results that appear for the first time in the literature for the considered setting.
[R2] Lan G, Zhou Y. Conditional gradient sliding for convex optimization. SIAM Journal on Optimization. 2016;26(2):1379-409.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I'd like to thank the authors for their response. I have increased my score. | Summary: This paper investigates algorithms for constrained saddle point (SP) problems where the objective function is nonconvex-concave and smooth. Existing methods are usually projection-based and this paper focuses on developing single-loop projection-free algorithms which only use linear minimization oracles.
In particular, this paper provides convergence guarantees for nonconvex-concave SP problems and nonconvex-strongly concave SP problems. This paper also investigates one-sided projection-free methods which can achieve an improved convergence matching the SOTA results for projection-based methods.
Strengths: This paper is well-written and easy to follow. The contributions of this paper are also straightforward: developing projection-free algorithms for saddle point problems and providing convergence guarantees for the proposed methods. By considering the LMO-PO oracle, the rate obtained matches the SOTA convergence of projection-based algorithms.
Weaknesses: 1) When analyzing the convergence guarantees for the R-PDCG method, the author also assumes that Y is S-Convex set. Is this condition inevitable? Also, the rate for R-PDCG is slightly worse than projection-based methods, is this rate improvable, or it is already optimal for projection-free algorithms?
2) It seems a bit strange that the experiment results are shown in the Introduction part. The paper would be more convincing if the authors can present more extensive experiments and additional numerical results.
3) It seems that parameter $\mu$ and $\tau$ are related to parameter $K$; how to set $K$ in practice?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the "Weaknesses" part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors addressed the limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1 When analyzing the convergence guarantees for the R-PDCG method, the author also assumes that Y is S-Convex set. Is this condition inevitable? Also, the rate for R-PDCG is slightly worse than projection-based methods, is this rate improvable, or it is already optimal for projection-free algorithms?**
**A1**
Thank you for raising this question.
We would like to remark that the lower bound complexity for finding an $\epsilon$-stationary point of problem (1) in the nonconvex-concave setting is not yet known. However, the complexity of our proposed algorithm CG-RPGA matches the state-of-the-art single-loop algorithms (see [13]).
On the other hand, classical studies on FW-type methods have shown that, unlike projection-based methods, strong convexity of the objective function does not necessarily lead to an accelerated rate (faster than $\mathcal O(1/K)$) [R1]. Therefore, we conjecture that our complexity result for R-PDCG is indeed optimal. Achieving a faster rate often requires imposing additional assumptions (see [31]), such as the existence of a solution in the interior of the domain or a uniform lower-bound on the norm of the gradient of the objective function at the solution.
It is important to note that the extension of most of these assumptions to the min-max problems may not be reasonable since the maximization problem is parameterized by the minimizer variable $x$. Consequently, our novel analysis using the relatively mild set of assumptions led to convergence results that appeared for the first time in the literature for the considered setting.
[R1] Lan G, Zhou Y. Conditional gradient sliding for convex optimization. SIAM Journal on Optimization. 2016;26(2):1379-409.
---
**Q2 It seems a bit strange that the experiment results are shown in the Introduction part. The paper would be more convincing if the authors can present more extensive experiments and additional numerical results.**
**A2**
Thank you for your suggestions. We will add more experiments to the revised manuscript. Specifically, we have implemented our proposed algorithms to address the Robust Multiclass Classification example and have compared the results with those of competitive schemes. As evident in the attached PDF file, our methods outperform the others, highlighting the advantage of utilizing a projection-free approach. Furthermore, as suggested by the reviewer, we will incorporate a "Numerical Experiments" section in the revised manuscript and relocate the plots to this section.
---
**Q3 It seems that parameters $\mu$ and $\tau$ are related to parameter $K$; how to set $K$ in practice?**
**A3**
Thank you for bringing up this point. Please note that the step-size $\tau_k$ and parameter $\mu_k$ can be equivalently selected in terms of the user-specified accuracy $\epsilon>0$. Specifically, in Corollary 4.3, we have $\mu=\mathcal O(\epsilon)$ and $\tau=\mathcal O(\epsilon^5)$. In Corollary 5.2, we find $\mu=\mathcal O(\epsilon)$ and $\tau=\mathcal O(\epsilon^3)$. Therefore, these parameters can be set according to the user-prescribed parameter $\epsilon$. We will modify these parameter selections in the Corollaries accordingly for clarification.
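A hypothetical helper encoding these parameter orders might look like the sketch below (all constants are set to 1 purely for illustration; they are not values from the paper):

```python
def rpdcg_params(eps):
    """Parameter orders from Corollary 4.3 (R-PDCG, nonconvex-concave):
    mu = O(eps), tau = O(eps^5), K = O(eps^-6).  The unit constants
    are illustrative placeholders, not the paper's constants."""
    return {"mu": eps, "tau": eps ** 5, "K": int(round(eps ** -6))}

def cgrpga_params(eps):
    """Orders from Corollary 5.2 (CG-RPGA): mu = O(eps),
    tau = O(eps^3), K = O(eps^-4); same caveat on constants."""
    return {"mu": eps, "tau": eps ** 3, "K": int(round(eps ** -4))}
```

In both cases the user only supplies the target accuracy $\epsilon$; the step size, regularization parameter, and iteration budget follow.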
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I have increased my score. | Summary: This paper proposes projection-free optimization algorithms for constrained nonconvex-(strongly) concave saddle point problem.
Solution concept: $\epsilon$-stationarity
To this end, they propose R-PDCG and CG-RPGA algorithms.
1. Without projection, the iteration complexity of R-PDCG is $O(\epsilon^{-6})$ for nonconvex-concave and $O(\epsilon^{-4})$ for nonconvex-strongly-concave objectives.
2. With projection for the maximization step, the iteration complexity of CG-RPGA is $O(\epsilon^{-4})$ for nonconvex-concave and $O(\epsilon^{-2})$ for nonconvex-strongly-concave objectives.
They illustrate the results through experiments.
Strengths: 1. The problem is well-motivated and the applications are clear.
2. This is the first projection-free method for constrained nonconvex-concave saddle point problem.
Weaknesses: I am judging the paper by its theoretical contribution, because the applications are supportive of the theory but are not the main message.
The main weakness of the work is the novelty of the theoretical tools used in the proofs. **Lemma 4.1 + Lemma B.1 are the keys to the proofs.
The proof techniques, specifically (12)-(14) (rest of the proof follows by assumptions) are standard steps in any FW-based method for smooth functions (see [14] for example).**
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors:
1. Could you comment on the optimality of the bounds, i.e., how tight are the bounds?
2. Could you highlight the novelties required for the proof beyond Lemma 4.1, and Lemma B.1?
3. In the experiments, the algorithm does converge to a stationary point, but does stationarity guarantee a good solution for these applications?
4. It seems like if the function is smooth nonconvex-nonconcave, the algorithm should guarantee stationarity. If yes, you should highlight that in the paper. If not, could you explain why? In other words, how important is the convexity?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: See "Weakness" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1 Could you comment on the optimality of the bounds, i.e., how tight are the bounds?**
**A1**
Thank you for raising this question. We would like to remark that the lower bound complexity for finding an $\epsilon$-stationary point of problem (1) in the nonconvex-concave setting is not yet known. However, the complexity of our proposed algorithm CG-RPGA matches the state-of-the-art single-loop algorithms (see [13]). Moreover, in the nonconvex-strongly concave setting, our proposed methods match the lower bound complexity of $\mathcal O(\epsilon^{-2})$ (see [R1]).
[R1] Li H, Tian Y, Zhang J, Jadbabaie A. Complexity lower bounds for nonconvex-strongly-concave min-max optimization. Advances in Neural Information Processing Systems. 2021 Dec 6;34:1792-804.
---
**Q2 Could you highlight the novelties required for the proof beyond Lemma 4.1, and Lemma B.1? The proof techniques, specifically (12)-(14) (rest of the proof follows by assumptions) are standard steps in any FW-based method for smooth functions (see [14] for example).**
**A2** That's a great question.
The novelty of our methods lies in addressing the maximization component of the problem as a parametric optimization problem: $\max_{y\in Y} \mathcal L(x,y)$. Our convergence analysis goes beyond established methods, showcasing new and valuable insights. Notably, it introduces new intermediary steps, exemplified by Lemma 4.1, where we show how the suboptimality of the regularized objective function $\mathcal L_\mu(x_k,\cdot)$ can be diminished up to the error term $\mathcal E(\tau)$. These new steps lead to results that appear for the first time in the literature (see Theorems 4.2 and 5.1). Although some of the analysis steps resemble the standard analysis of FW-type methods, we present unique inequalities; an illustration is equation (17), which emerges as a consequence of proving the technical Lemma B.1. This enriches the scope of our findings and distinguishes our work in this domain.
---
**Q3 In the experiments, the algorithm does converge to a stationary point, but does stationarity guarantee a good solution for these applications?**
**A3**
Thank you for sharing your feedback. In our paper, all of our proposed methods achieve an $\epsilon$-game stationary gap. The relation between $\epsilon$-game stationarity and other notions of $\epsilon$-stationarity for saddle point problems has been studied extensively in [R2]. For instance, in the Dictionary Learning example considered in our paper, an $\epsilon$-game stationary solution leads to $\epsilon$-infeasibility and a reduction of the objective loss function. This observation aligns perfectly with our primary goal in Dictionary Learning.
[R2] Li J, Zhu L, So AM. Nonsmooth Composite Nonconvex-Concave Minimax Optimization. arXiv preprint arXiv:2209.10825. 2022 Sep 22.
---
**Q4 It seems like if the function is smooth nonconvex-nonconcave, the algorithm should guarantee stationarity. If yes, you should highlight that in the paper. If not, could you explain why? In other words, how important is the convexity?**
**A4**
We are a bit puzzled by the reviewer's question. It would be great if the reviewer can provide more details about the question. The definition of the stationary solution is presented in Definition 2.3 of the paper and we provide a gap function in Definitions 2.1 and 2.2 to measure an $\epsilon$-stationary solution of the saddle problem. Due to the lack of convexity assumption, our only hope is to guarantee a stationary solution which is stated in the results of the paper (see Corollary 4.3 and 5.2).
---
Rebuttal 2:
Title: Thanks for the response
Comment: I am happy with the response and keep my score. | Summary: This paper proposed two projection-free algorithms for solving smooth nonconvex- (strongly) concave saddle point problems. The authors showed that the convergence rates of the proposed algorithms matches the state-of-the-art convergence rate of projection-based methods. Experimental results on dictionary learning verify that the proposed algorithms show great advantage in terms of training time compared to existing projection-based methods.
Strengths: The motivation and contribution of this paper are clear. Projections in algorithms could be problematic in practice and potentially slow down the training. This work fills this gap in nonconvex-(strongly) concave saddle point problems.
Weaknesses: I do not see any major weakness, but I do have some questions. Please see the Questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the related work section, I would recommend the authors to discuss the works on projection-free methods for solving bilevel optimization. Since saddle point problem can be viewed as a special case of bilevel problem, methods proposed for solving bilevel problems should be applicable for SP problems.
2. The result in line 247 needs a reference. I believe it is related to Danskin's Lemma.
3. In the experiment part, I noticed that the one-sided projection-free method CG-RPGA has better performance in terms of training time compared with the fully projection-free method R-PDCG. The authors argue that this matches the convergence rates. I'm not sure about this argument, because the convergence rates are in terms of the number of iterations, and the time per iteration is obviously different for each method. I think the training time plots are trying to verify that even though AGP has the same convergence rate as CG-RPGA, AGP is slower mainly due to the projection oracle. If the authors would like to verify the different convergence rates of CG-RPGA and R-PDCG, a plot of the gap function versus iteration number would be more appropriate.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are well discussed in the last section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1 It is recommended that the authors discuss the works on projection-free methods for solving bilevel optimization.**
**A1** Thanks for the great suggestion. There is indeed a connection between bilevel optimization and saddle point (SP) problems, and we will add the related work on bilevel optimization in the revised manuscript. However, it is important to note that most existing methods for solving bilevel optimization problems make an additional assumption that the lower-level objective function satisfies strong convexity or the Polyak-Łojasiewicz (PL) condition. In the context of SP problems, these assumptions translate into strong concavity or the PL condition for $\mathcal L(x,\cdot)$, which, to the best of our knowledge, cannot handle the merely concave setting considered in this paper.
---
**Q2 The result in line 247 needs a reference. I believe it is related to Danskin's Lemma.**
**A2** Thanks for pointing this out. We will add the reference in the revised manuscript.
---
**Q3 In the experiment part, I noticed that the one-sided projection-free method CG-RPGA has better performance in terms of training time compared with the fully projection-free method R-PDCG. The authors argue that this matches the convergence rates. I'm not sure about this argument, because the convergence rates are in terms of the number of iterations, and the time per iteration is obviously different for each method. I think the training time plots are trying to verify that even though AGP has the same convergence rate as CG-RPGA, AGP is slower mainly due to the projection oracle. If the authors would like to verify the different convergence rates of CG-RPGA and R-PDCG, a plot of the gap function versus iteration number would be more appropriate.**
**A3**
We believe there is some confusion regarding the algorithms' plots and their convergence rates, which we would like to clarify. Note that R-PDCG has a complexity of $\mathcal O(1/\epsilon^6)$ while CG-RPGA has a complexity of $\mathcal O(1/\epsilon^4)$. From Figure 1 in the paper, it can be observed that CG-RPGA converges faster than R-PDCG, which matches the complexity results obtained in the paper. Moreover, we believe that the plots in terms of time give a better picture when comparing these methods. One of the main goals of our paper is to show the advantage of using the LMO for certain classes of problems; this is indeed the case when the LMO is cheaper to compute than the PO, which can be observed when comparing the computational costs of these algorithms. Per the reviewer's suggestion, plots of the algorithms in terms of iteration counters will be added to the paper (see the attached PDF file).
---
Rebuttal Comment 1.1:
Comment: In the plots of the algorithms in terms of iteration counters in Example 2 (Dictionary Learning), I wonder why the fully projection-based algorithm AGP has a slower convergence performance than FW-based algorithms. It seems that AGP has a comparable convergence result to the result of CG-RPGA and is even faster than R-PDCG in theory.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for the follow-up question. We would like to highlight that in the examples we considered in the paper, the projection onto the constraint set $X$ requires full SVD decomposition, therefore, it leads to a higher computational cost for AGP algorithm. As we fixed the running time of algorithms in all the experiments, AGP will take fewer iterations compared to other methods. Therefore, it is very important to acknowledge the distinct oracles these algorithms employ when comparing their complexity results as stated in Table 1 of our paper. Moreover, it should be noted that the complexity results available for these methods are only upper bounds for the gap functions and the algorithms may have better performances on specific examples. The examples provided in the numerical experiments support the motivation behind the development of projection-free methods for saddle point problems, particularly when an LMO is available.
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for the response and additional plots.
For the Robust Multiclass Classification experiment, due to the few iterations of AGP shown in the iteration plots in Figures 1 and 2, it is still not clear how AGP performs compared with the proposed methods in terms of iterations. Moreover, as Reviewer RF27 mentioned in the comment, in the Dictionary Learning experiment AGP converges much slower in terms of iterations. This does not match the theory. I wonder if the authors have any insights on this observation.
For now, I will keep my score unchanged.
---
Reply to Comment 1.2.1:
Comment: This aligns precisely with the core motivation of our paper. As detailed in our response to Reviewer cP1u, projecting onto the nuclear-norm constraint incurs a higher computational cost than the corresponding LMO: the projection requires a full SVD, while the LMO requires only the left and right singular vectors corresponding to the largest singular value of $\nabla_x\mathcal L(x_k,y_k)$ (please also see our response to Reviewer RF27). In the Robust Multiclass Classification example, we observe that the high per-iteration computational cost of the projection operator significantly impacts AGP, with just over three iterations taking more than 300 seconds, reflecting the benefit of projection-free algorithms for a certain class of problems. It is therefore very important to acknowledge the distinct oracles these algorithms employ when comparing their complexity results, as stated in Table 1 of our paper. For problems with easy-to-project constraints, projection-based algorithms such as AGP may perform better; however, the examples provided in the numerical experiments support the motivation behind developing projection-free methods for saddle point problems with hard-to-project constraints, particularly when an LMO is available. For the final version of the paper, we will run AGP for additional iterations to enhance the clarity of its performance.
We appreciate the reviewer's question and would be more than happy to address any other concerns they may have. | Rebuttal 1:
Rebuttal: In response to the questions from reviewers, we have implemented our proposed algorithms to address the Robust Multiclass Classification example and have compared the results with those of competitive schemes. Moreover, in response to the reviewer WRhg, for the Dictionary Learning problem, the plot of the algorithms in terms of iteration counters is added.
Pdf: /pdf/7e364ff86120dfc5bded5b0f67082e8a192a9312.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes two Frank-Wolfe (FW) based algorithms for solving a class of nonconvex-(strongly) concave saddle point problems. The proposed algorithms are among the first projection-free methods with convergence guarantees for such problems, as the authors claim. The paper uses regularization and nested approximation techniques to deal with the nonsmooth component, and applies them in a primal-dual scheme. In short, it approximates the nonsmooth function $f(x)$ via $\mu$-regularization in the convex-concave setting. If the objective function is strongly concave in $y$, the regularization can be avoided by setting $\mu=0$. In terms of novelty, the techniques used in this paper are common, and the analysis seems quite classic to me. However, this simple combination still brings interesting results. I think this is a very good and well-written paper, and it has made a great contribution. Therefore, I suggest accepting this paper, but I may change my perspective based on other comments.
Strengths: Originality. This paper is a good combination of the regularization technique and Frank-Wolfe method. This allows to obtain a single-loop projection-free method with cheaper computational cost for the nonconvex-concave problem.
Quality. As far as I see, the proofs are correct. Experiments show the advantage of the projection-free methods. It would be better to add another simulation example for situations where projecting onto the constraints $X$ and $Y$ is difficult.
Clarity. The paper is easy to understand and the results are clearly stated and well-organized. I would like to suggest the authors to double check language, symbols, and definitions. For example, "problem 1" should be changed to "problem (1)", and $\mathcal{G}_X(\bar z)$ should be changed to $\mathcal{G}_X(\bar x,\bar y)$ in Definition 2.1.
Significance. This paper considers a class of nonconvex-concave saddle-point problems, which widely exists in robust optimization, reinforcement learning and adversarial learning. Given existing results, the main contribution of this paper are about solving such problems via projection-free schemes, which reduce the computational complexity in dealing with the problem with structured complicated constraint set, such as nuclear norm ball. The proposed methods can be useful in practice because of its cheaper computational cost and ability to solve the problem with complicated constraint sets.
Weaknesses: The convergence requirement of the fully projection-free method R-PDCG is that the set $Y$ is strongly convex, which is very limited in practical applications. If this assumption can be removed while achieving faster convergence performance (comparable to projection based methods), it would be a better result. In addition, the value of step size $\tau_k$ and the parameter $\mu_k$ is related to the total iteration $K$. If the total number of iterations is large, this will result in a small step size of the algorithms and slow convergence. It would be better to improve the step size $\tau_k$ to a constant that is independent of the total number of iterations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The four theorems proposed in this paper state that "there exists $t\in\{\cdots\}$ such that ... satisfy the following bounds". Does this mean that only a limited number of iterations satisfies the bounds? Is this measure reasonable, and what is its practical significance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1 Add an example where projecting onto the constraints is difficult**
**A1**
Thank you for your suggestions. We will add more experiments to the revised manuscript. Specifically, we have implemented our proposed algorithms to address the Robust Multiclass Classification example and have compared the results with those of competitive schemes. (see the attached PDF file)
---
**Q2 The convergence of R-PDCG required the set $Y$ to be strongly convex, which is very limited in practical applications. Can this assumption be removed?**
**A2**
The strong convexity for set $Y$ plays a critical role in our convergence analysis which is closely related to the analysis of FW-type methods.
To gain a deeper understanding of this assumption, it is helpful to examine the convergence results of FW-type methods for solving strongly convex minimization problems. Classical studies on FW-type methods have shown that, unlike projection-based methods, strong convexity of the objective function does not necessarily lead to an accelerated rate (faster than $\mathcal O(1/K)$) [R1]. Achieving a faster rate often requires imposing additional assumptions (see [31]), such as the existence of a solution in the interior of the domain or a uniform lower-bound on the norm of the gradient of the objective function at the solution.
It is important to note that extending most of these assumptions to min-max problems may not yield a reasonable assumption, since the solution set of the maximization problem $\mathcal Y^\star(x)=\hbox{argmax}_{y\in Y}\mathcal L(x,y)$, as well as the gradient at the maximizer $\nabla_y\mathcal L(x,y^\star(x))$, changes with respect to $x$. Consequently, our novel analysis using a relatively mild set of assumptions led to convergence results that appear for the first time in the literature for the considered setting.
Our motivating examples, along with Remark 2.8, underscore the significant relevance of our assumption in various machine learning applications. Section 5 of reference [31] provides a comprehensive explanation of diverse examples and their applications, further emphasizing the practical implications of our work. We would also like to mention that the application of strongly convex sets goes beyond machine learning and has also been a subject of interest in optimal control theory [R2].
In our paper, we show two specific applications satisfying the strongly convex set assumption. The Continual Dictionary Learning example effectively exhibits the application of strong convexity of the set $Y$, as it includes an $\ell_2$-norm ball constraint. For the Robust Multiclass Classification example, the constraint of the maximization is the intersection of the simplex and a divergence measure constraint. Indeed, one can relax the simplex constraint using the splitting technique and Fenchel duality. The resulting equivalent saddle point problem has the maximization constraint $Y=\{y:V(y,\frac{1}{n}\mathbf 1_n)\leq \rho\}$, which is described by the divergence measure constraint alone. In some popular examples such as the Pearson Chi-square divergence, i.e., $V(y,\mathbf{1}_n/n)=\|ny-\mathbf{1}_n\|^2$, $Y$ satisfies the strongly convex constraint set assumption. We will add a more detailed discussion in this regard to the revised manuscript.
[R1] Lan G, Zhou Y. Conditional gradient sliding for convex optimization. SIAM Journal on Optimization. 2016;26(2):1379-409.
[R2] Veliov VM, Vuong PT. Gradient methods on strongly convex feasible sets and optimal control of affine systems. Applied Mathematics \& Optimization. 2020 Jun;81:1021-54.
---
**Q3 The value of $\tau_k$ and $\mu_k$ is related to the total iteration $K$.**
**A3**
Please note that the step-size $\tau_k$ and parameter $\mu_k$ can be equivalently selected in terms of the user-specified accuracy $\epsilon>0$. Specifically, in Corollary 4.3, we have $\mu=\mathcal O(\epsilon)$ and $\tau=\mathcal O(\epsilon^5)$. In Corollary 5.2, we find $\mu=\mathcal O(\epsilon)$ and $\tau=\mathcal O(\epsilon^3)$. Therefore, these parameters can be set according to the user-prescribed parameter $\epsilon$. We will modify these parameter selections in the Corollaries accordingly for further clarification.
---
**Q4 The theorems state that "there exists $t\in \ldots$ such that the following bounds are satisfied". Is this measure reasonable and what is its practical significance?**
**A4**
Thank you for raising this question. In our convergence analysis, we demonstrated that after performing $K$ iterations, our proposed methods guarantee that at least one of the iterates in $\{(x_k,y_k)\}_{k=1}^{K}$, say $(x_t,y_t)$, satisfies the $\epsilon$-gap criterion, i.e., $\mathcal G_Z(x_t,y_t)\leq \epsilon$.
We highlight that this criterion is indeed easy to track during the course of the algorithm. In particular, one can track the values $\mathcal G_Z(x_k,y_k)=\langle \nabla_x \mathcal L(x_k,y_k),x_k-s_k\rangle + \langle \nabla_y \mathcal L(x_k,y_k),p_k-y_k\rangle$ without any additional cost at each iteration until it reaches or falls below the desired accuracy $\epsilon>0$.
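To illustrate how cheap this tracking is, here is a minimal sketch on a toy bilinear problem $\mathcal L(x,y)=x^\top y$ over the box $[-1,1]^d$. The objective, constraint set, and all variable names are our own illustrative assumptions, not the paper's setting; the point is only that $s_k$ and $p_k$ are the linear-oracle outputs a Frank-Wolfe-type step already computes, so the gap costs one extra inner product per block:

```python
import numpy as np

def gap(x, y, gx, gy, s, p):
    # G_Z(x, y) = <grad_x L, x - s> + <grad_y L, p - y>,
    # reusing the oracle outputs s, p from the current iteration.
    return gx @ (x - s) + gy @ (p - y)

d = 3
x, y = np.zeros(d), np.full(d, 0.5)
gx, gy = y, x                      # gradients of L(x, y) = x^T y
s = -np.sign(gx)                   # argmin over [-1, 1]^d of <gx, s>
p = np.where(gy >= 0, 1.0, -1.0)   # argmax over [-1, 1]^d of <gy, p>
g = gap(x, y, gx, gy, s, p)        # here: 0.5 * 3 = 1.5
print(g)                           # stop the algorithm once g <= epsilon
```

In an actual run, this value would simply be compared against the desired accuracy $\epsilon$ at the end of each iteration.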
---
Rebuttal Comment 1.1:
Comment: It would be better to provide the step sizes of the different algorithms used in the Simulation section. In addition, please state the reason for the choice of $\tau_k$ and $\mu_k$, which are not discussed anywhere in the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for pointing this out.
(i) Due to space limitations, we have relegated the details of our experiment to section F of the Appendix. In the final version of the paper, having one additional page will allow us to relocate this information to the main body of the paper. As it is mentioned in Appendix section F, ``For all the algorithms, the step-sizes are selected as suggested by the papers and scaled to have the best performance. For AGP we let the primal step-size $\frac{1}{\sqrt{k}}$, dual step-size as 0.2, and the dual regularization parameter as $\frac{10^{-1}}{k^{1/4}}$; for SPFW both primal and dual step-sizes are selected to be diminishing as $\frac{2}{k+2}$".
(ii) The selection of the step size $\tau_k$ and parameter $\mu_k$ is discussed in the proofs of Corollaries 4.3 and 5.2. These parameters are selected to minimize the upper bound derived in Theorems 4.2 and 5.1, respectively. For instance, in Theorem 4.2, we have provided an explicit upper bound on the primal and dual gap functions. Considering the dominant terms in the aggregation of these two bounds in terms of $\tau,\mu$, and $K$, we observe that $\mathcal{G}_Z(x_t,y_t)\leq \mathcal O(\frac{1}{\tau K}+\frac{\tau^{1/3}}{\mu^{2/3}}+\mu)$. Therefore, selecting $K=\mathcal O(\epsilon^{-6})$, $\tau=\mathcal O(\epsilon^5)$, and $\mu=\mathcal O(\epsilon)$ implies that $\mathcal{G}_Z(x_t,y_t)\leq \epsilon$ after $K=\mathcal O(\epsilon^{-6})$ iterations.
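As a quick sanity check (our own arithmetic, not part of the original rebuttal), substituting these choices into the bound term by term gives

$$\frac{1}{\tau K}=\frac{1}{\epsilon^{5}\cdot\epsilon^{-6}}=\epsilon,\qquad \frac{\tau^{1/3}}{\mu^{2/3}}=\frac{\epsilon^{5/3}}{\epsilon^{2/3}}=\epsilon,\qquad \mu=\epsilon,$$

so each dominant term is $\mathcal O(\epsilon)$ and indeed $\mathcal{G}_Z(x_t,y_t)\leq \mathcal O(\epsilon)$.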
Title: Unsupervised Anomaly Detection with Rejection
Decision: Accept (poster)

Summary: The authors address the topic of rejection of samples in an unsupervised anomaly detection setup. Their approach focuses on determining a constant rejection threshold, which allows the detector to reject examples with high uncertainty. The newly proposed method introduces this rejection threshold based on a confidence score given by another existing model (ExCeeD). The authors provide theoretical analyses as well as empirical experiments and show that it is possible to set a constant rejection threshold with strong theoretical guarantees.
Strengths: - The paper provides a good structure, but it also requires some prior knowledge in this area to be able to follow the presented thoughts, e.g., research questions are later connected to the corresponding sections.
- In-depth theoretical methodology as well as empirical evaluation.
- Detailed overview of single results in the supplement is given.
Weaknesses: - It was not always straightforward to follow the paper, especially because a lot of variables are introduced but defined much later in the paper (e.g., $t_1(n, \gamma, T)$ (line 131) and $\gamma = \mathds{P}(Y = 1)$ (line 199)). Starting with a more explanatory part would let the reader build an intuition about which factors are important for calculating the rejection threshold. With all the factors in mind, it would be easier to follow the complex theoretical contribution.
- In line 135, $\epsilon$ is defined as $1 - 2e^{-T}$. In the formula between lines 123 and 124, the rejection threshold is defined as $\mathcal{T} = 1 - \epsilon = 1 - 2e^{-T}$, which would result in $\epsilon = 2e^{-T}$.
Minor comments:
- "Our approach is called **RejEx** (Rejecting via ExCeed)" (line 110)
- In Theorem 3.8, it refers to Theorem 3.5 for the definition of g. Theorem 3.5, however, refers to Theorem 3.4; it would be easier to directly refer to Theorem 3.4.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: None
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Difficult write-up.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

Rebuttal 1:
Rebuttal: Dear Reviewer imnX,
We appreciate your **positive feedback**, and that you eventually were able to **follow our theoretical contribution**. In the revised version, we will address your points about the notation and try to give some **more intuitions prior to the theoretical sections**.
We will fix the typos you correctly pointed out: line 135 should have $2e^{-T}$ instead of $1-2e^{-T}$, we missed *RejEx* in line 110, and Theorem 3.8 should refer to Def. 3.4 instead of Theorem 3.5.
---
Rebuttal Comment 1.1:
Title: rebuttal
Comment: thx.

Summary: This paper presents a rejection scheme for the task of unsupervised anomaly detection. Learning to reject enables a predictor to withhold from making a prediction; this paradigm is more common in supervised learning. Here, the authors extend the rejection idea to the unsupervised anomaly detection task. The idea is to reject samples based on a stability metric; namely, if the prediction is unstable under small changes in the feature space, the prediction is rejected. This type of stability metric was recently proposed and is termed EXCEED. The authors present a theoretical analysis of the EXCEED metric and derive upper bounds for the test rejection rate and expected prediction cost. The new scheme is evaluated for several anomaly detectors on real datasets and outperforms other rejection schemes.
Strengths: The paper is well-written, and easy to follow. Overall, the presentation is scientifically sound. The problem of unsupervised anomaly detection is extremely challenging and important; the paper presents a rejection scheme that could improve trust in commonly used detectors. The idea of using stability and specifically the EXCEED metric, makes sense. The theoretical analysis strengthens the work and offers bounds on the expected values of the presented scheme. The empirical evidence presented in the paper is promising and demonstrates the merits of the method.
Weaknesses: Background on the EXCEED method is missing; adding more information on this metric could help the reader. Some recently proposed NN anomaly detectors are missing from the evaluation, for example:
[1] Qiu, Chen, et al. "Neural transformation learning for deep anomaly detection beyond images." International Conference on Machine Learning. PMLR, 2021.
[2] Shenkar, T., & Wolf, L. (2021, October). Anomaly detection for tabular data with internal contrastive learning. In International Conference on Learning Representations.
[3] Lindenbaum, et al. (2021). Probabilistic robust autoencoders for outlier detection. arXiv preprint arXiv:2110.00494.
There are many other NN that could be included, I think some NN baselines should be considered.
The description of the experiments conducted is too brief; it would be good if the authors could expand on the implementation and evaluation protocol. For example, what is $\lambda$ in all experiments? Or how is it tuned?
Sample size is limited in the evaluation to 20K.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The analysis assumes that $\gamma$ is known, in practice how can you gain access to this value? Is there a way to overcome this limitation?
Please expand on the ExCeeD metric, equation 1.
A comma is missing after this equation.
How is the cost influenced when changing $\lambda$? It is not clear from the text what you do with this value.
Why are you not mentioning the subsampling in the main text? Is it so computationally demanding to evaluate the method on large datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in the last section. I would, however, add information on the cases in which rejection increases the cost, for example using statistics across datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

Rebuttal 1:
Rebuttal: Dear Reviewer Xnvu,
Thanks for your **positive and constructive feedback**. Here are our responses:
1. [**Experimental setup**] Section 5.1 clarifies our experimental setup, including **how we set hyperparameters**. If you can let us know specific things that are unclear, we will add them. Also, we have submitted and will release all information (code, data repository, …) to replicate the experiments.
2. [**Setting the decision threshold $\lambda$**] As stated in lines 71-73, the decision threshold $\lambda$ is set such that $\gamma \times n$ scores are $\ge \lambda$, where $\gamma$ is the dataset’s contamination factor and $n$ is the training set size. Because we are operating in a **fully unsupervised** setting, we do not consider $\lambda$ as a hyperparameter to tune [59] but we set it the same way as in other papers [see for example cites 23,49,50,51]. Consequently, we do not analyze how the test cost varies when changing $\lambda$.
3. [**$\gamma$ is known**] We assume that the contamination factor $\gamma$ is given, as stated in line 98. However, approaches exist to **estimate it** from a given dataset. This can even be done from a **fully unlabeled** dataset; see for example cite 50 in the paper.
4. [**References**] We will discuss the references in the final version of the paper and will do our best to **include as many as possible** in the experimental analysis.
5. [**ExCeeD**] We will provide further details about ExCeeD in the final version of the paper. Also, see our response to reviewer 3a9y.
6. [**Computational cost**] Limiting the dataset size to $20K$ by taking a subsample is an experimental detail that has the unique goal of **saving computational effort**. In fact, running all $2040$ experiments with the size limit requires more than a week. Note that, as shown in Q3, our method has **low computational cost**, as opposed to *Stability*, which uses an expensive internal optimization, and *Ens*, which uses an ensemble of models.
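As an illustration of the threshold rule described in point 2 above, here is a minimal NumPy sketch. The scores, and the variable `lam` standing in for $\lambda$, are our own made-up example, not the paper's data:

```python
import numpy as np

scores = np.array([0.1, 0.9, 0.3, 0.7, 0.5])  # toy training anomaly scores
gamma, n = 0.4, len(scores)                    # contamination factor, train size

# Choose lambda so that ceil(gamma * n) training scores are >= lambda:
k = int(np.ceil(gamma * n))                    # number of predicted anomalies
lam = np.sort(scores)[::-1][k - 1]             # k-th largest score
print(lam, np.sum(scores >= lam))              # 0.7 2
```

With $\gamma = 0.4$ and $n = 5$, exactly two scores fall at or above the threshold, matching the rule that the top $\gamma \cdot n$ scores are flagged as anomalies.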
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the authors for responding to all my comments.
I have no additional open questions about the paper.

Summary: This paper suggests applying the stability metric computed by EXCEED for anomaly detection. The authors present theoretical findings regarding this metric, including the test rejection rate, as well as upper bounds for both the rejection rate and the expected prediction cost. Furthermore, comprehensive experiments are conducted to validate the effectiveness of the proposed method.
Strengths: 1. The presentation of the paper is clear, and the proposed method is simple but effective.
2. This paper offers a theoretical analysis of EXCEED, deriving the upper bounds for both the rejection rate and the expected prediction cost.
3. The effectiveness of the proposed method and the validity of the theoretical results are confirmed through comprehensive experiments.
Weaknesses: 1. The methods compared in Figure 1 appear to be significantly dated. It would be valuable if the paper could include additional results pertaining to recently proposed methods.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to [Weakness].
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Please refer to [Weakness].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes

Rebuttal 1:
Rebuttal: Dear Reviewer R9xH,
We appreciate your **positive review**. Our method is **anomaly detector-agnostic**, which means that it can be applied on top of any anomaly detector. We ran the experiments using $12$ anomaly detectors included in the most recent and largest experimental comparison [23], which include:
- Classical algorithms, like LOF and IForest, that are often used as baselines because they obtain competitive performance [65, 66, 67];
- More recent algorithms, like COPOD (2020), ECOD (2022).
Note that cite [23] claims that **“none of the unsupervised methods is statistically better than the others”** and that **“some Deep Learning based unsupervised methods are surprisingly worse than shallow methods”**.
Furthermore, out of $34$ datasets, $15$ datasets have at least one detector with $AUC > 0.90$, and $12$ datasets have at least one detector with $AUC > 0.7$. Thus this seems like a **reasonably extensive benchmark**.
Finally, from the review, it is unclear what other anomaly detection methods the reviewer considers to be state-of-the-art.
----
[65] Qiu Chen, et al. "Neural transformation learning for deep anomaly detection beyond images." International Conference on Machine Learning. PMLR, 2021.
[66] Han, Songqiao, et al. "Adbench: Anomaly detection benchmark." Advances in Neural Information Processing Systems 35 (2022): 32142-32159.
[67] Cai, Jinyu, and Jicong Fan. "Perturbation learning based anomaly detection." Advances in Neural Information Processing Systems 35 (2022).
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the rebuttal. My questions have been well addressed.

Summary: - The authors proposed a selective predictor (learning to reject) for the fully unsupervised setting in anomaly detection problems, given an unsupervised anomaly detector.
- The proposed method is based on the theoretical supports and the threshold can be selected without any labeled data.
- The experimental results show that the proposed method can significantly reduce the cost of selective prediction.
Strengths: - The proposed method is grounded on the theoretical supports that could be beneficial on generalization.
- The experimental sections are extensive and multiple ablation studies show its superiority across various settings.
Weaknesses: - It seems like the proposed method is only applicable when the anomaly ratio is given. In some cases, the anomaly ratio itself is not provided.
- It would be great if the authors could provide more extensive experiments, in comparison to the baselines, for the case when some labeled data is available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. EXCEED metric
- I can understand the brief concepts of the EXCEED metric.
- However, it would be good to further explain how the training data is perturbed.
- Also, it would be good to add additional explanations of Equation (1) - like the motivations of this equation.
2. Problem settings
- So, here, do we assume that we have an access to the contamination ratio (gamma)?
- If yes, can we extend this method without access to the contamination ratio?
3. With some labels
- As discussed in Related works, if we have some labels, we can easily optimize the rejection function using two ways that the authors explained.
- In that case, can we analyze how many samples do we need to have similar performance with the proposed unsupervised method?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

Rebuttal 1:
Rebuttal: Dear Reviewer 3a9y,
Thanks for the **very specific and helpful feedback**. We will address all your comments in the final version of the paper. Here is our response to your questions:
1. ExCeeD uses a Bayesian formulation that simulates **bootstrapping** the training set as a form of perturbation. Equation 1 computes the confidence for a test score $s$ in two parts. **First**, it computes $\psi_n$ as the proportion of training scores lower than $s$. **Second**, it quantifies the probability that the model predicts anomaly for $s$ by estimating the proportion of times that $\psi_n > 1-\gamma$ (i.e., $s$ is in the top $\gamma$% of training scores) when simulating the bootstrapping of the training set.
2. Yes, we assume that the **contamination ratio is given**, as stated in line 98. However, approaches exist to estimate it from a given dataset. This can even be done from a fully unlabeled dataset; see, for example, cite 50 in the paper.
3. Experiment Q5 (lines 327-333) shows that our approach with a fully labeled training set would **only reduce the test cost by $0.6$%**, on average. We agree that analyzing on a theoretical level how many samples are needed to obtain a performance that is similar to the unsupervised one is interesting. This is certainly a good direction for **future work**, especially for unsupervised settings, because it would shed light on the number of labels needed to **justify the improvement**.
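The two-part computation described in point 1 can be sketched schematically as follows. Note that this explicit resampling loop is our own Monte Carlo simplification for intuition only; the actual ExCeeD confidence is obtained in closed form from a Bayesian argument, and the function name and parameters here are illustrative assumptions:

```python
import numpy as np

def confidence(train_scores, s, gamma, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(train_scores)
    psi = np.mean(train_scores < s)        # part 1: fraction of training scores below s
    hits = 0
    for _ in range(n_boot):
        boot = rng.choice(train_scores, size=n, replace=True)
        # part 2: s is predicted anomalous when it lies in the top gamma fraction
        hits += np.mean(boot < s) > 1 - gamma
    return psi, hits / n_boot              # (psi_n, estimated confidence)

train = np.linspace(0.0, 1.0, 100)
psi, conf = confidence(train, s=2.0, gamma=0.1)
print(psi, conf)                           # 1.0 1.0 (s lies above every training score)
```

A test score far above all training scores gets confidence close to 1, i.e., the anomaly prediction is stable under bootstrap perturbations of the training set.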
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed response to my questions
Comment: Fully unsupervised settings (even without the contamination ratio) would be nice future work (or extensions).
Also, what I asked in Question (3) is more of an experimental analysis: how many samples would we need to collect instead of using the proposed method? If fewer than 10 samples suffice, some practitioners will just gather them instead of using this method.
Anyway, I think this is an interesting paper and I will stand by my original score.

Summary: This paper proposes an approach to perform learning to reject for anomaly detection in a completely unsupervised manner. The authors make three major contributions: (1) a novel theoretical analysis of a stability metric for anomaly detection, (2) a mechanism for designing an ambiguity rejection mechanism without any labeled data that offers strong guarantees, and (3) an evaluation of the proposed approach on an extensive set of unsupervised detectors and benchmark datasets. The authors show that their method outperforms several adapted baselines based on other unsupervised metrics and that their theoretical results hold in practice.
Strengths: Originality: The paper proposes a novel approach to perform ambiguity rejection for anomaly detection in a completely unsupervised manner. The authors provide a novel theoretical analysis of a stability metric for anomaly detection and show that it has several previously unknown properties that are of great importance in the context of learning to reject.
Quality: The paper provides a thorough theoretical analysis of the proposed approach and demonstrates its effectiveness through experiments on an extensive set of unsupervised detectors and benchmark datasets. The authors also provide strong guarantees for their proposed method.
Clarity: The paper is well-written and easy to follow. The authors provide clear explanations of the proposed approach and the theoretical analysis.
Significance: The proposed approach addresses the challenge of uncertainty in traditional anomaly detectors and provides a solution through Learning to Reject. The authors show that their method outperforms several adapted baselines based on other unsupervised metrics and that their theoretical results hold in practice. The proposed approach has significant implications for anomaly detection in various domains.
Weaknesses: The authors could provide more intuition on how EXCEED works to estimate stability.
The paper could benefit from a more detailed discussion of the limitations of the proposed approach and potential directions for future research.
While the paper provides a thorough theoretical analysis of the proposed approach, it could benefit from more detailed explanations of the experimental setup and results. Specifically, the paper could provide more information on the hyperparameters used in the experiments and how they were selected, as well as more detailed comparisons with other state-of-the-art methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there any other method to estimate the detector's stability, in addition to EXCEED? If so, how does EXCEED outperform other methods?
2. In Definition 3.7, the cost function is defined as a simple addition. What if the cost function is of a more complex form?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

Rebuttal 1:
Rebuttal: Dear Reviewer 6n88,
Thanks for the **constructive feedback**. We will include a **more thorough overview** of how ExCeeD works in the final version to improve the paper's readability. Lines 268-273 state that all the **hyperparameters** are set to their **default values** because we are operating in an unsupervised setting, meaning we **do not have labels** to tune them. Finally, here are our responses to your main concerns:
1. [**Measuring Stability**] Apart from [47] which is included as a baseline in the paper, we are not aware of other methods that quantify a detector’s stability;
2. [**Cost function**] Setting a proper cost function requires domain knowledge. In the learning to reject literature, most works use an additive cost function (see, e.g., the survey in [25]). Exploring other cost functions is a relevant area and if you are aware of other cost functions used in the literature, please let us know and we will include it in the final version of the paper.
3. [**State-of-the-art rejection methods**] We would be happy to know what other methods the reviewer is referring to. To the best of our knowledge, there are no other algorithms for setting a rejection threshold without labels.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal! I have read it and will keep my recommendation.
Title: Contrastive Modules with Temporal Attention for Multi-Task Reinforcement Learning
Decision: Accept (poster)

Summary: The paper studies modular multi-task reinforcement learning to address the negative transfer problem. The proposed method has two components: 1. contrastive learning on module outputs, to encourage model expressiveness and generalization; 2. use of temporal information to combine module outputs, to address negative transfer within tasks. Experiments in Meta-World show that the proposed method outperforms other multi-task baselines and learning tasks individually.
Strengths: 1. The motivation for modular learning with temporal attention to address negative transfer is clear.
2. The paper is very well-written, with relevant references and easy-to-follow narration.
3. The method is the only one outperforming single-task RL in Meta-World.
Weaknesses: 1. The experiments don't reflect the claim that the method improves generalization.
2. Some previous works (e.g., Multi-Task Reinforcement Learning with Soft Modularization) also select different modules in different time steps in a task.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why are the results in Figure 3 quite different to the results in "Soft Modularization". In that paper, Soft Modularization outperforms multi-task variants of SAC a lot.
2. Since most multi-task methods cannot outperform the individual-learning baseline, what are the benefits of multi-task RL? Can the learned multi-task model adapt to unseen tasks quicker?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes

Rebuttal 1:
Rebuttal: We appreciate your valuable feedback and would like to thank you for your time and effort in reviewing our manuscript. Any further discussion will be appreciated.
> W1: The experiments don't reflect the claim that the method improves generalization.
We believe that the following changes in settings have contributed to increased task complexity and diversity:
1. The number of tasks increasing from 10 (MT10) to 50 (MT50).
2. Transitioning from fixed positions to varying positions (mixed).
In both of these scenarios, our method exhibits greater advantages over other baselines (as indicated in lines 259-262). This to a certain extent underscores the superior generalization ability of our approach.
> W2: Some previous works (e.g., Multi-Task Reinforcement Learning with Soft Modularization) also select different modules in different time steps in a task.
In comparison to SoftModu, our method differs not only in the utilization of LSTM but also in the specifics of attention computation and their application. For instance, while we concatenate temporal information with task information to calculate attention, SoftModu employs a dot product between state information and task information, involving multiple layers of attention weight computation.
> Q1: Why are the results in Figure 3 quite different to the results in "Soft Modularization". In that paper, Soft Modularization outperforms multi-task variants of SAC a lot.
We mentioned this point in Section 5.4 Lines 277-283. The original SoftModu paper employed a loss weighting trick, which we deliberately omitted in our experiments to ensure fairness in comparison. Additionally, Figure 6(b) in the SoftModu paper (Yang et al., 2020) displays the performance without utilizing this trick, and it aligns quite closely with our reproduction results.
> Q2.1: Since most multi-task methods cannot outperform the individual-learning baseline, what are the benefits of multi-task RL?
**Better Sample Efficiency:** This advantage can be interpreted as each task benefiting from additional samples generated by auxiliary tasks. Consequently, during the initial training stages, the performance improvement of multi-task RL algorithms outpaces that of Single-SAC.
**Model Capacity Compression:** By employing a single network to address all tasks instead of using n separate networks.
> Q2.2: Can the learned multi-task model adapt to unseen tasks quicker?
This outcome hinges on whether the multi-task model merely memorizes multiple tasks or is capable of extracting commonalities among tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response!

Summary: This work proposes to enhance the expressiveness and generalization capability of modular methods in multi-task reinforcement learning by applying a contrastive loss over different task modules and encoding the task-related information with a temporal attention module. This work shows that by applying both techniques, the proposed CMTA method can learn modules producing different learned embeddings and outperform baselines on different benchmarks.
Strengths: * Different modules of the method are well-motivated, aligned with intuition, and described in detail. Details are provided for the community to reproduce the results. Using contrastive learning to enforce that different modules learn different skills is reasonable.
* The visualization of learned encoding of different modules shows the contrastive learning term encourages the different modules to learn skills with different semantic meanings.
* According to the experiment results, the proposed method CMTA outperformed different baselines by a large margin in different settings.
* The overall writing of the work is easy to follow.
Weaknesses: 1. Some experiment results are not well-explained:
* According to Sec. 5, the Mixed version of the Meta-World benchmark is supposed to be more difficult than the Fixed version. Why is Single-SAC performing worse in Fixed MT10? It also seems the Single-SAC results (in the Fixed setting) are much worse than those reported in previous works.
* All baselines perform worse in Mixed (compared with Fixed), while the proposed method works better in the Mixed version.
* Why does the performance of all methods significantly drop after a certain stage for MT50-Mixed (a similar phenomenon does not appear in MT10-Mixed)?
2. Some components of the method could be better ablated, e.g., how does the number of experts affect the performance of the method? Since this work proposes to learn more meaningful skills for different experts via contrastive learning, more discussion on this part would make the work stronger.
3. Visualization of the attention weight for different tasks is missing, which could help the audience understand the proposed method.
4. Selecting different skills over time is discussed in previous work (Soft Modularization, where the state information and the task information are used at the same time to output the module selection), and no specific training objective in this work addresses this issue (the difference from previous work here seems to be the LSTM).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Some explanation regarding the experiment results mentioned in the weakness section would be appreciated.
* Additional visualization regarding the attention weights for different tasks. And since different modules have different semantic meaning (powered by the contrastive learning), some investigation on what kind of skill a specific expert represents would be interesting as well.
* Though this work claims the method works without any loss weighting trick in the optimization, it would be interesting to see some results from that end.
I would raise my rating if the authors could reasonably address part (given the limited time for rebuttal) or all of my concerns.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: This work proposes a general multi-task RL method; in this case, I think no specific potential negative societal impact needs to be addressed. As the authors indicated, the current method works well in multi-task RL (with a fixed number of tasks), and I would like to see some extension to meta-learning or open-vocabulary settings in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our sincere appreciation to the reviewer for their valuable insights and constructive feedback. Any further discussion will be appreciated.
> W1.1.1: According to the Sec 5, the Mixed version of MetaWorld benchmark is supposed to be more difficult than the fixed version. Why the Single-SAC is performing worse in the Fixed MT10 ?
The Mixed version has varying positions. In theory, the agent is more likely to deduce the patterns or objectives of tasks (e.g., moving an object to a goal) rather than merely memorizing fixed positions. Thus it is possible for Single-SAC to perform better in Mixed MT10.
> W1.1.2: it seems the Single-SAC results (in Fixed setting) is much worse than the reported in previous works.
In the CARE paper, the performance of Single-SAC reached 90%. However, in our reproduction, we could only achieve this level of performance with a single seed. We raised an issue on GitHub regarding our inability to reproduce the Single-SAC results, but the authors have not responded. In the SoftModu paper, after averaging over 3 seeds, the performance of Single-SAC was reported as 78.5%. Considering the difference in seed numbers, this result closely aligns with our experimental findings.
> W1.2: All baselines perform worse in Mixed (compared with in Fixed), while the proposed method works better in Mixed Version.
This might be indicative of our method's superior generalization ability, enabling it to capture invariant patterns across different positions and thus leading to improved performance. In contrast, other methods in the mixed environment could potentially shift from memorizing a single set of positions (as in Fixed) to remembering 50 sets of positions. This shift could contribute to a decline in performance due to increased complexity and task variation.
> W1.3: Why the performance of all methods significantly drops after a certain stage for MT50-Mixed (similar phenomenon did not appear in MT10-Mixed).
All algorithms exhibit this phenomenon in the MT50-Mixed environment, and we believe this is an inherent issue with the mixed environment itself. Most of the baselines experience performance degradation around 0.5 million steps, whereas our method's performance decline begins at 1.5 million steps, highlighting its robustness. We speculate that the reason for this performance drop lies in the more diverse nature of the MT50-Mixed environment. Overtraining can lead to model overfitting, causing it to prioritize simpler tasks over more challenging ones. Our observations of individual task success rates support this: in MT10-Mixed, among the 10 tasks, only one task experiences a decline in performance during the later stages (Appendix C, Figure 6, push-v1). However, in MT50-Mixed, among the same set of 10 tasks, 5 tasks show performance drops, and these declines occur earlier in training than in MT10-Mixed. Given that our x-axis represents steps per task, the training data volume for MT50 is five times that of MT10, which can lead to quicker overtraining.
> W2: Some components of the methods could be better ablated like, how does the number of experts affect the performance of the method.
The ablation on the number of experts can be found in the PDF of the global response. The experimental results indicate that having fewer experts leads to performance degradation; however, increasing the number of experts beyond a certain point does not yield additional benefits.
> W3: Visualization of the attention weight for different tasks is missing, which could help the audience understand the proposed method.
See the t-SNE visualization of attention weights for different tasks in the PDF of the global response. The attention weights of different tasks clearly form distinct clusters, which indicates that CMTA chooses different module combinations for different tasks.
> W4: The use of different skill-use is discussed in previous work (Soft Modularization, where the state information and the task information are used at the same time to output the module selection), and no specific training objective (the difference with previous here seems to be the LSTM ) in this work is addressing this issue.
In comparison to SoftModu, our method differs not only in the utilization of LSTM but also in the specifics of attention computation and their application. For instance, while we concatenate temporal information with task information to calculate attention, SoftModu employs a dot product between state information and task information, involving multiple layers of attention weight computation.
> Q1: Some explanation regarding the experiment results mentioned in the weakness section would be appreciated.
See the answer of W1.
> Q2: Additional visualization regarding the attention weights for different tasks.
See the answer of W3.
> Q3: Though this work claims the method works without any loss weighting trick in the optimization, it would be interesting to see some results from that end.
We added a relatively simple loss weighting trick to our method: let the task weighting $\lambda_i$ (see Equation 1 in our paper) be proportional to $\exp(1/(su_i + \delta))$, where $su_i$ is the current success rate of task $i$. The evaluation performance (averaged over 8 seeds) after training for 1 million steps is:
| | smoothed SR on MT10-Mixed |
| -------------------------- | ------------------------- |
| CMTA | 78.5 |
| CMTA+ naive loss weighting | 73.6 |
Interestingly, the introduction of this trick resulted in a decrease in the performance of our method. This might suggest that the naive loss weighting we attempted is not as effective as initially thought. Additionally, there exist numerous studies on loss weighting in MTRL, all of which could potentially be integrated with our method or the baselines we've utilized.
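For concreteness, the naive weighting described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name and the choice to normalize the weights to sum to one are assumptions.

```python
import math

def task_weights(success_rates, delta=0.1):
    """Naive loss weighting: lambda_i is proportional to exp(1/(su_i + delta)),
    so tasks with lower current success rates receive larger weights.
    `delta` guards against division by zero when a success rate is 0."""
    raw = [math.exp(1.0 / (su + delta)) for su in success_rates]
    total = sum(raw)
    return [w / total for w in raw]  # normalize so the weights sum to 1 (an assumption)

# The hardest task (lowest success rate) gets the largest weight.
weights = task_weights([0.9, 0.5, 0.1])
```

As the rebuttal notes, upweighting low-success tasks this aggressively (the exponent grows quickly as $su_i \to 0$) may be one reason the trick hurt rather than helped.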
---
Rebuttal Comment 1.1:
Comment: The response addressed most of my concerns. I'm raising my score to weak accept | Summary: The paper introduces an approach for multi-task RL. Their approach is similar to CARE, which learns separate encoder modules, but they add a contrastive task loss on top of the encoders. They show this approach outperforms all reported baselines on Meta-World (MT-10 and MT-50) and a variant of Meta-World where the initializations are mixed throughout training.
Strengths: - The proposed approach outperforms all reported baselines and independent training on the metaworld tasks in the fully observed setting.
- The proposed solution is straightforward to implement.
Weaknesses: - A lot of the methods section should be moved to a preliminary section because it is difficult to understand what is novel and not novel. The temporal attention section on L168-180 is one example. Overall I found the organization of the paper confusing.
- There are not enough experimental results to fully validate the method (e.g., relevant baselines like PC-grad). In all of the comparisons to baselines the approach is still fairly overlapping with the error bars of other baselines and the meaning of the error bars is not described anywhere in the text.
- It would help the reader to consolidate the terms. The modules are separately referred to as both experts and modules. I would find it more straightforward if the naming was consistent.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - On L143 "Notably, it surpasses the performance of learning each task individually for the first time in the Meta-World environment" Is this true in the case of MT10? E.g., Figure 3 of Gradient Surgery for Multi-Task Learning (Yu et. al., 2020).
- What are the error bars in the figures?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the reviewer for their thorough review and valuable recommendations that have strengthened our paper. Any further discussion will be appreciated.
> W1: A lot of the methods section should be moved to a preliminary section because it is difficult to understand what is novel and not novel. The temporal attention section on L168-180 is one example. Overall I found the organization of the paper confusing.
Both subsections 4.1 and 4.2 in our method section are novel and thus don't need to be placed in a preliminary section. Our innovation lies in two aspects: contrastive learning and temporal attention. The section on temporal attention (L168-180) is indeed one of our contributions, and it diverges from the previous approach of CARE: while CARE employs only task information for attention, our attention module incorporates temporal information as well. It's possible that the reviewer overlooked this distinction, leading to confusion.
> W2.1: There are not enough experimental results to fully validate the method (e.g., relevant baselines like PC-grad).
The experimental results of PC-grad can be compared using Table 1 and Table 3 in the CARE paper (Sodhani et al., 2021). From these tables, it becomes evident that CARE outperforms PC-grad and is indeed a stronger baseline:
| | MT10 | MT50 |
| ------ | ----------------- | ---------------- |
| PCGrad | 0.72 $\pm$ 0.022 | 0.5 $\pm$ 0.017 |
| CARE | 0.84 $\pm$ 0.051 | 0.54 $\pm$ 0.031 |
The landscape of existing multi-task RL methods spans multiple orthogonal directions, encompassing architecture design, gradient modulation (including PCgrad), and loss weighting. Methods from different directions can indeed be combined, such as PCgrad + our method, PCgrad + CARE, and so on. However, this fusion is not the primary focus of our study. Consequently, we haven't included PCgrad in our baseline comparisons, as the baselines we have chosen primarily reside in the architecture design direction.
> W2.2: In all of the comparisons to baselines the approach is still fairly overlapping with the error bars of other baselines and the meaning of the error bars is not described anywhere in the text.
Not mentioning the meaning of the error bars in the paper was indeed an oversight on our part, and we appreciate the reviewer's observation. The error bars (shaded areas) represent the standard deviation across 8 different seeds. In RL experiments, results can be highly influenced by the choice of seeds, making it necessary to average outcomes across multiple seeds to mitigate randomness-induced errors. The overlapping error bars among the baselines merely indicate the substantial effect of RL's randomness in MT10. As a result, our primary mode of comparison relies on the mean performance of the different algorithms. Furthermore, in MT50, there is no overlap among the different baselines; this distinct separation clearly highlights the superiority of our approach.
> W3: It would help the reader to consolidate the terms. The modules are separately referred to as both experts and modules. I would find it more straightforward if the naming was consistent.
That's a great suggestion, and we will certainly rephrase the relevant sections in the forthcoming revised version to minimize reader confusion. We appreciate the reviewer's valuable advice.
> Q1: On L143 "Notably, it surpasses the performance of learning each task individually for the first time in the Meta-World environment" Is this true in the case of MT10? E.g., Figure 3 of Gradient Surgery for Multi-Task Learning (Yu et. al., 2020).
In Figure 3 of the PC-grad paper (Yu et al., 2020), PC-grad appears to achieve performance close to that of single-task training. However, the paper does not explicitly mention averaging results across multiple seeds in the experiments. Hence, we speculate that PC-grad might have used only a single seed for its Meta-World experiments. The substantial randomness inherent in reinforcement learning, as discussed in W2.2, weakens the persuasiveness of results from a single seed. Based on our experimental findings, it is evident that only our approach outperforms single-task training in both MT10-fixed and MT10-mixed.
> Q2: What are the error bars in the figures?
See the answer of W2.2. | Summary: This paper proposes an approach to multi-task RL called Contrastive Modules with Temporal Attention that aims to address the issue of negative transfer between tasks in multi-task RL.
The proposed method consists of two main components: contrastive learning and temporal attention. The contrastive learning component is used to ensure that the shared modules learned by the method are distinct from each other. This is achieved by applying a contrastive loss that encourages the modules to produce different outputs for the same input. The temporal attention component is used to dynamically combine the outputs of the different modules at each time step. This allows the method to adapt to the specific requirements of each task.
The authors evaluate their method on the Meta-World benchmark, a widely used benchmark for multi-task RL. They compare the performance of their method with several baselines, including methods that train each task separately and methods that share all modules across tasks. The results show that CMTA outperforms the baselines in terms of both sample efficiency and performance.
Strengths: Originality
- The paper presents a novel approach to multi-task RL by introducing contrastive modules with temporal attention.
- The method addresses the issue of negative transfer between tasks, which is a significant challenge in multi-task RL. The authors propose a novel solution to this problem by constraining the modules to be different from each other and using temporal attention to dynamically combine them.
Quality
- The authors provide adequate experimental results on Meta-World, a widely-accepted continuous control robotics benchmark, and some ablation studies that support the effectiveness of their method.
- The paper is well-referenced, indicating a thorough understanding of the existing literature. The authors clearly position their work within the context of previous research.
Clarity:
- The paper is well-organized and the writing is clear. The authors provide a clear explanation of their method and its advantages.
- The figures and tables in the paper are informative and support the text well. They help to clarify the method and the experimental results.
Significance:
- The proposed method addresses a critical challenge in the field and shows superior performance compared to existing methods.
- The method proposed by the authors, in particular the temporal attention mechanism, has the potential to be widely adopted in the field of multi-task RL. It could also inspire future research in this area.
Weaknesses: Limited insight into hyperparameter sensitivity: The paper would be stronger if it discussed the sensitivity of the proposed method to its hyperparameters. Understanding how changes in hyperparameters affect the performance of the model is crucial for reproducibility and for users who wish to apply the method to their own tasks.
Lacking insight into soft attention weights: It would be interesting to see the general relationships of soft attention weights and other aspects of the problem, such as its temporal nature, the tasks involved, etc.
Limited Discussion on Failure Cases: While the paper presents a number of successful results, it could discuss in detail the scenarios where the proposed method fails or performs sub-optimally. Such a discussion could provide valuable insights into the limitations of the method and guide future improvements.
Lack of Comparison with Related Work: While the paper compares the proposed method with several baselines, it does not compare it with other multiple other methods that also use contrastive learning or attention mechanisms in the context of multi-task RL (*see first bullet point below). Such comparisons could provide a more comprehensive evaluation of the proposed method.
- However, the paper does add the contrastive loss to CARE and evaluate it, but CARE's attention mechanism is very different than CMTA's and only CARE was compared against in this way.
- It would be interesting to see how other proposed baselines or competitive related work would benefit from the proposed temporal attention module.
Generalizability: The proposed method has been evaluated on a specific benchmark (Meta-World). Its performance on other benchmarks or real-world tasks that differ from Meta-World type setup and tasks is not known. CMTA may not outperform certain baselines or other algorithms on suites of tasks in other benchmarks or real-world tasks.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Subsumed into Weaknesses and Limitations.
1. What are three limitations of your work that have not been addressed in the reviews, and what are your thoughts about them? This isn't intended to diminish your work. Instead it's to show that you understand where and how your work shines and to highlight where it may not as future problems to be addressed or as problems that are insignificant for some reason(s).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Scalability: The proposed method might not scale well to tasks with a larger number of subtasks or to more complex environments. Even though the paper claims in Lines 311-312 that "as the variety of tasks increases, the advantages of our approach become more apparent", the authors don't show this outside of Meta-World, and Meta-World is a measure-zero subset of possible tasks.
Regardless, the computational cost of the method, especially the contrastive learning part, could increase significantly with the complexity of the tasks.
Dependence on Task Similarity: The effectiveness of the proposed method might depend on the similarity of the tasks. If the tasks are very different from each other, the shared modules learned by the method might not be effective for all tasks.
I did not find the paper addressing its limitations anywhere. One I will put forth is the lack of insight into the temporal attention module's relationship to different aspects of the problem, such as (1) how the attention weights vary over time, (2) whether they reach a steady state, and (3) what they focus on, for the Meta-World problems covered in the paper.
I believe we should be wary of how this overall approach behaves, especially the temporal attention module and the assigned soft attention weights, in problems in which ethics, safety, fairness, bias, may be a concern. This is general concern for any algorithm that doesn't explicitly address and mitigate these issues, but it's especially one here because of the lack of insight into the temporal attention module's relationship to aspects of the problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their thoughtful suggestions and comments that have greatly improved our manuscript. Any further discussion will be appreciated.
> W1: Limited insight into hyperparameter sensitivity
The ablation on the number of experts can be found in the PDF of the global response. The experimental results indicate that having fewer experts leads to performance degradation; however, increasing the number of experts beyond a certain point does not yield additional benefits.
> W2: Lacking insight into soft attention weights: It would be interesting to see the general relationships of soft attention weights and other aspects of the problem, such as its temporal nature, the tasks involved, etc.
- Temporal nature: see the answer to L4.
- Tasks involved: see the t-SNE visualization of attention weights for different tasks in the PDF of the global response. The attention weights of different tasks form distinct clusters, which indicates that CMTA chooses different module combinations for different tasks.
> W3: Limited Discussion on Failure Cases
Some discussion of failure cases, including:
- According to the Sec 5, the Mixed version of MetaWorld benchmark is supposed to be more difficult than the fixed version. Why the Single-SAC is performing worse in the Fixed MT10, and it seems the Single-SAC results (in Fixed setting) is much worse than the reported in previous works.
- All baselines perform worse in Mixed (compared with in Fixed), while the proposed method works better in Mixed Version.
- Why the performance of all methods significantly drops after a certain stage for MT50-Mixed (similar phenomenon did not appear in MT10-Mixed).
Due to character limitations, the answers can be found in reviewer fVjH's W1.
> W4: Lack of Comparison with Related Work
Currently, there are no other MTRL algorithms utilizing contrastive learning. Additionally, the employment of attention mechanisms is limited to CARE and SoftModu. Our use of contrastive learning and temporal attention is built upon a mixture-of-experts (MoE) design, which restricts their applicability to methods utilizing MoE (specifically, only CARE employs MoE). To validate the temporal attention module on other baselines like MTSAC, it would be necessary to introduce an MoE. However, this modification would alter the original structure (e.g., SAC + temporal attention (+MoE) = CMTA w/o CL). In essence, this corresponds to a portion of our ablation experiments.
> W5: Generalizability: The proposed method has been evaluated on a specific benchmark (Meta-World). Its performance on other benchmarks or real-world tasks that differ from Meta-World type setup and tasks is not known. CMTA may not outperform certain baselines or other algorithms on suites of tasks in other benchmarks or real-world tasks.
MetaWorld provides a diverse task distribution with 50 different tasks involving objects like doors, cups, windows, drawers, etc., and skills like push, pull, open, close, etc. Evaluating on a broad task distribution (MetaWorld) provides a good estimate of the generalization capabilities of MTRL algorithms.
Currently, MetaWorld stands as the sole widely recognized benchmark in MTRL, and previous MTRL works (CARE (Sodhani et al., 2021), SoftModu (Yang et al., 2020)) also evaluate their methods only on MetaWorld. Therefore, we would first need to construct new benchmarks before evaluating on other settings, so we leave this as future work.
> L1: Scalability
As mentioned in W5, MetaWorld stands as the sole widely recognized benchmark in MTRL and provides a good estimate of the generalization capabilities of MTRL algorithms. In Meta-World, we believe the following changes in settings have contributed to increased task complexity and diversity:
- The number of tasks increasing from 10 (MT10) to 50 (MT50).
- Transitioning from fixed positions to varying positions (mixed).
In both of these scenarios, our method exhibits greater advantages over other baselines (as indicated in lines 259-262). This outcome to some extent substantiates the claim that "as the variety of tasks increases, the advantages of our approach become more apparent."
> L2: Regardless, the computational cost of the method, especially the contrastive learning part, could increase significantly with the complexity of the tasks.
The computational cost of the contrastive learning part is $O(n^2)$, where $n$ is the number of experts. If task similarity is substantial, then even with an increase in the number of tasks, the computational cost won't necessarily escalate as long as the number of experts remains constant. For instance, in the case of MT50, we employed the same six experts as in MT10.
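As a rough sketch of why this cost is quadratic in the number of experts but independent of the number of tasks (the pairwise cosine penalty below is an illustrative stand-in, not the paper's exact contrastive loss):

```python
import itertools
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def expert_dissimilarity_loss(expert_outputs):
    """Illustrative pairwise penalty encouraging experts to produce different
    outputs for the same input. The loss sums over all (i, j) expert pairs,
    i.e. n*(n-1)/2 terms, hence O(n^2) in the number of experts n."""
    pairs = list(itertools.combinations(expert_outputs, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# With 6 experts (as used for both MT10 and MT50) there are only 15 pairwise
# terms, regardless of whether 10 or 50 tasks are being trained.
outputs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2], [0.2, 0.9], [0.7, 0.7]]
loss = expert_dissimilarity_loss(outputs)
```

This makes concrete the point in the rebuttal: the quadratic term is in the expert count, so holding the number of experts fixed keeps the contrastive cost flat as tasks are added.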
> L3: Dependence on Task Similarity
Task similarity is indeed a fundamental aspect of multi-task learning. Even humans struggle to extract mutually beneficial information from entirely unrelated tasks. For instance, attempting to learn jointly from a combination of tasks like playing Go and playing Atari games would likely yield limited benefits.
> L4: I did not find the paper addressing its limitations anywhere. One I will put forth is the lack of insight into the temporal attention module's relationship to different aspects of the problem.
Based on our observations, the trends in attention weights exhibit rapid changes during the early stages of each episode, followed by continuous fluctuations within a narrow range. It is plausible that the agent rapidly infers the necessary skills during the initial phase and subsequently refines the skills it employs based on the discrepancies between execution and prediction. Since our attention mechanism does not directly operate on input observations, it becomes challenging to directly infer what specific aspects the agent is focusing on.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I've read it, the global pdf, and the other reviews and associated rebuttal discussions.
W2.
I'm not a fan of only providing t-SNE because of utter lack of parameter invariance in qualitative insights. If using sklearn, the documentation [1] shows that the function has the following parameters,
class sklearn.manifold.TSNE(n_components=2, *, perplexity=30.0, early_exaggeration=12.0, learning_rate='auto', n_iter=1000, n_iter_without_progress=300, min_grad_norm=1e-07, metric='euclidean', metric_params=None, init='pca', verbose=0, random_state=None, method='barnes_hut', angle=0.5, n_jobs=None)
and parameters, such as perplexity, that greatly affect the visualization are not varied or expressly detailed.
Regardless, the clusters are distinct, though the paper would be stronger if a reasonable attempt were made to make the t-SNE plots reproducible. Complete reproducibility may not be possible, since t-SNE has a non-convex loss function. However, if the only difference is the randomization, then with such clean clusters there should be no real issue obtaining clean clusters again, unless there was an issue obtaining them originally. I'd suggest providing as much information as possible on your process for creating the t-SNE plots (a good general practice, since it takes little effort) and pairing t-SNE with a data visualization method that is less user-manipulatable.
W4.
Regarding works using contrastive learning in MTRL. I apologize, I was over-reaching by phrasing it this way. Instead this work is MTRL using contrastively learned MoE embeddings + context task embeddings. So for learning distinct experts in MoE setting, there are continual learning methods that perform OoD detection prior to creating a new task [3, 4] and/or use similarity measures such as a Kullback-Leibler, Jensen-Shannon loss or Wasserstein distance loss to encourage dissimilarity between experts.
There are also approaches that use contrastive learning on MoEs to learn distinct experts [5].
I don't grasp the point(s) being made in the remainder of this paragraph that is speaking to attention mechanisms.
W5.
Meta-World generalization provides a good estimate for generalization on unseen Meta-World and similar tasks, which are a tiny space of continuous control tasks that are similar to the seen Meta-World tasks.
For other items, I don't have any further comments. Thank you for your work and rebuttal. I will maintain my score as is and encourage the Authors' to consider the Reviewers' feedback and rebuttal discussions in improving their paper.
[1] https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html.
[2] Eysenbach, B., Zhang, T., Levine, S., & Salakhutdinov, R. R. (2022). Contrastive learning as goal-conditioned reinforcement learning. Advances in Neural Information Processing Systems, 35, 35603-35620.
[3] Nagabandi, A., Finn, C., & Levine, S. (2018). Deep online learning via meta-learning: Continual adaptation for model-based rl. arXiv preprint arXiv:1812.07671.
[4] Xu, M., Ganesh, S., & Pasula, P. (2022). Mixture of basis for interpretable continual learning with distribution shifts. arXiv preprint arXiv:2201.01853.
[5] Mustafa, B., Riquelme, C., Puigcerver, J., Jenatton, R., & Houlsby, N. (2022). Multimodal contrastive learning with limoe: the language-image mixture of experts. Advances in Neural Information Processing Systems, 35, 9564-9576.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. I have read the comment, including the papers you mentioned [3, 4, 5].
W2.
We used the following code to produce the t-SNE visualization, and we will add it to the appendix in the revised version:
```
import sklearn.manifold

tsne = sklearn.manifold.TSNE(n_components=2, init='pca', random_state=40)
```
For other parameters that might affect the visualization, we used the default values. Furthermore, we have also altered the `random_state` to visualize the data again, and we still obtained nice clusters.
W4.
In [3, 4], apart from employing the mixture of models, there seems to be no explicit demonstration of constraining dissimilarity between modules, which presents a significant contrast with our approach. While in other domains, the use of similarity measures to encourage dissimilarity between experts does indeed share a similar motivation with our work, the implementation methods differ from ours.
As for [5], it belongs to the multi-modal domain. Its purpose in using contrastive learning is to align image and text representations, treating corresponding $Z_{text}$ and $Z_{image}$ as positive pairs. So it cannot be directly applied to MTRL, as its purpose and implementation details are distinct from ours (we instead use the outputs of the same expert at the current and next time-steps as positive pairs).
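The positive-pair construction described above can be illustrated with a generic InfoNCE-style objective. This is a hypothetical sketch, not the authors' code: the function name, temperature value, and similarity choice are assumptions.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: the anchor (an expert's output at time t) should be
    close to its positive (the same expert's output at time t+1) and far from
    the negatives (other experts' outputs). Returns -log softmax probability
    of the positive, so the loss is always positive."""
    def sim(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u)) or 1.0
        nv = math.sqrt(sum(b * b for b in v)) or 1.0
        return dot / (nu * nv)

    logits = [sim(anchor, positive) / temperature] + [
        sim(anchor, neg) / temperature for neg in negatives
    ]
    m = max(logits)  # stabilize log-sum-exp
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

# Positive pair: same expert at consecutive time-steps; negatives: other experts.
loss = info_nce([1.0, 0.1], [0.9, 0.2], [[0.0, 1.0], [-1.0, 0.3]])
```

The loss is small when an expert's consecutive outputs agree and other experts' outputs differ, which matches the stated goal of keeping modules distinct.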
W5.
Indeed, Meta-World tasks do exhibit certain limitations; However, at present, there is no superior benchmark available for MTRL. In the future, when new benchmarks become available, it might be possible to further validate our approach on them.
[3] Nagabandi, A., Finn, C., & Levine, S. (2018). Deep online learning via meta-learning: Continual adaptation for model-based rl. arXiv preprint arXiv:1812.07671.
[4] Xu, M., Ganesh, S., & Pasula, P. (2022). Mixture of basis for interpretable continual learning with distribution shifts. arXiv preprint arXiv:2201.01853.
[5] Mustafa, B., Riquelme, C., Puigcerver, J., Jenatton, R., & Houlsby, N. (2022). Multimodal contrastive learning with limoe: the language-image mixture of experts. Advances in Neural Information Processing Systems, 35, 9564-9576. | Rebuttal 1:
Rebuttal: The PDF here includes our ablation experiments on the number of experts and the t-SNE visualization of CMTA attention weights of different tasks.
Pdf: /pdf/f9aa8e1e49041a0fb8a66596d97ced199ac74490.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper focuses on multi-task reinforcement learning. Motivated by the negative task transfer within each task, the paper proposes the Contrastive Modules with Temporal Attention (CMTA) method, which utilizes temporal attention to modulate the weights of experts. Concretely, temporal attention takes recent history as input to capture local information. The proposed CMTA is evaluated in MetaWorld and shows superior performance than baselines.
Strengths: - The methodology part is clear and well-written.
- The methodology is well-motivated.
Weaknesses: - The paper poses the multiple skills that may be utilized in each task as a negative transfer problem within a task. I believe such terminology is ok, but I would highly recommend including a section in the related work summarizing the skill-based RL literature and pointing out the deep connection between “negative transfer within a task” and “skill-based RL” in the introduction part.
- The novelty of the paper is limited from my point of view.
- The experiment part lacks necessary information; please refer to the questions. The ablation study could be further improved by analyzing the impact of the history length in the temporal attention mechanism, as well as the number of experts.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - What is the number of experts in the mixture for each experiment setting, including MT10-Fixed, MT10-Mixed, MT50-Fixed, MT50-Mixed?
- How is the smooth curve calculated exactly? The meaning of the smooth factor in line 242 is unclear.
- Why do the performances drop for MT50-Mixed after 1.5 Million steps (figure 3 the right most figure MT50-Mixed)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The limitation is not discussed in the conclusion. One trade-off is between the increased model size due to the experts and the improved performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Our gratitude goes to the reviewer for their insightful comments, which have significantly enhanced the quality of our work. Any further discussion will be appreciated.
> W1: The paper poses the multiple skills that may be utilized in each task as a negative transfer problem within a task. I believe such terminology is ok, but I would highly recommend including a section in the related work summarizing the skill-based RL literature and pointing out the deep connection between “negative transfer within a task” and “skill-based RL” in the introduction part.
We appreciate the reviewer's suggestions. We think that "skill-based RL" indirectly addresses the issue of "negative transfer within a task" by employing distinct skills. However, its primary focus lies in using hierarchical reinforcement learning to tackle intricate tasks that are challenging for a single policy, or in inducing diverse skills through reward shaping during pretraining to adapt swiftly to new tasks. We will incorporate this analysis into the forthcoming revised version of the paper.
> w2: The novelty of the paper is limited from my point of view.
The novelty of this paper is primarily twofold: contrastive learning and temporal attention. These address two aspects of non-modularity in current MTRL methods. Both aspects were acknowledged under "Originality" by Reviewer 4rda. In the paper, we conducted ablation experiments on the introduced components individually (as shown in Table 2 and Figure 4). These experiments demonstrated the effectiveness of both components, resulting in performance improvements of approximately 5% and 20%, respectively. Based on the summary, it appears the reviewer might have focused mainly on the second aspect, which may have led to confusion regarding the paper's novelty.
> W3: The experiment part lacks necessary information and please refer to the questions. The ablation study can be further improved by analyzing the impact of history length in the temporal attention mechanism, as well as the number of experts.
- number of experts: The ablation results can be seen in the PDF of the global response. The experimental results indicate that having fewer experts leads to performance degradation, while increasing the number of experts beyond a certain point does not yield further gains.
- history length: During the data collection process, we store both the current hidden state h and the next hidden state h'. Consequently, each sample comprises the elements (s, a, r, s', h, h'). This structure enables our temporal attention mechanism to effectively encompass the historical information of the entire trajectory up to the current time step. Notably, the history length is not a hyperparameter here, which makes ablation experiments on it unnecessary.
> Q1: What is the number of experts in the mixture for each experiment setting, including MT10-Fixed, MT10-Mixed, MT50-Fixed, MT50-Mixed?
We use 6 experts for all settings, as mentioned in Appendix D.
> Q2: How is the smooth curve calculated exactly? The meaning of the smooth factor in line 242 is unclear.
Thanks for bringing this omission to our attention. We will certainly address this in the revised version by including the calculation method for the smooth curve:
$$smoothed\_point[i] = \begin{cases} smoothed\_point[i-1] \cdot factor + point[i] \cdot (1-factor), & \text{if } i > 0 \\ point[i], & \text{if } i = 0 \end{cases}$$
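In code, this recurrence is a standard exponential moving average; a minimal sketch (plain Python, function name ours):

```python
def smooth_curve(points, factor=0.9):
    """Exponential moving average matching the recurrence above:
    the first point is kept as-is; each later point blends the previous
    smoothed value (weight `factor`) with the raw value (weight 1-factor)."""
    smoothed = []
    for i, p in enumerate(points):
        if i == 0:
            smoothed.append(p)
        else:
            smoothed.append(smoothed[-1] * factor + p * (1 - factor))
    return smoothed
```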
> Q3: Why do the performances drop for MT50-Mixed after 1.5 Million steps (figure 3 the right most figure MT50-Mixed)?
All algorithms exhibit this phenomenon in the MT50-Mixed environment, and we believe this is an inherent issue with the mixed environment itself. Most of the baselines experience performance degradation around 0.5 million steps, whereas our method's performance decline begins at 1.5 million steps, highlighting its robustness. We speculate that the reason for this performance drop lies in the more diverse nature of the MT50-Mixed environment. Overtraining can lead to model overfitting, causing it to prioritize simpler tasks over more challenging ones. Our observations of individual task success rates support this, as some tasks indeed show declines in success rates during later stages. In MT10-Mixed, among the 10 tasks, only one task experiences a decline in performance during the later stages (Appendix C, Figure 6, push-v1). However, in MT50-Mixed, among the same set of 10 tasks, 5 tasks demonstrate performance drops, and these declines occur earlier in the training process compared to MT10-Mixed. Given that our x-axis represents steps for each task, the training data volume for MT50 is five times that of MT10, which can lead to quicker overtraining issues.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I thank the authors for the detailed response, which helped improve the paper's clarity. I updated my score accordingly. | null | null | null | null | null | null |
Rank-N-Contrast: Learning Continuous Representations for Regression | Accept (spotlight) | Summary: The paper proposes the Rank-N-Contrast framework for learning regression-aware feature representations. The authors claim that this representation learning mechanism captures the continuous nature of sample orders and helps achieve better performance on downstream regression tasks.
Strengths: 1. The paper provides both experimental results and theoretical proof to support their method.
2. The paper includes relatively comprehensive ablation study and analysis section.
Weaknesses: 1. The paper lacks high-level intuition and in-depth analysis of why the method works, and only offers low-level interpretation of the experimental results. Following are some example questions I hope the authors could have addressed (going one step further than just stating their claims): Why does preserving order in the feature representation help to learn the continuous nature? In Fig. 3, why is a discernible pattern better? What are the theorems trying to convey (my understanding is that the feature embedding follows the order inherited from the labels)?
2. Although the authors have conducted experiments on 4 different datasets (where are the results for IMDB-WIKI, by the way?) and with 7 different regression losses, the datasets mostly have scalar responses (except for MPIIFaceGaze) and seem repetitive to me. I think the experimental results would be more convincing if the authors included experiments with higher-dimensional responses.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See Weakness section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I don't think authors have adequately addressed their limitations and weaknesses in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 2Kp1,
Thank you for your valuable questions and thoughtful feedback. Your comments have helped us to further improve the quality of our paper. However, we believe that there are several **important misunderstandings** which we would like to clarify and address point-by-point. We hope you could re-evaluate the paper based on our clarifications, and would really appreciate it if you could consider updating the review, and raising your score accordingly.
> *Why does preserving order in the feature representation help to learn the continuous nature?*
The continuous nature in data means that the target value is continuous, and thus the target distances between different data samples have **orders**. This intuition motivates us to learn a representation that preserves the order in the label space.
In our **Global Response**, we provide a theoretical analysis based on Rademacher Complexity to demonstrate that a feature embedding that preserves the order in the label space enables a better **generalization bound**, leading to better regression performance. Further, in our experiments, the superior performance (Sec. 5.1), as well as the improved data efficiency, robustness to spurious targets and data corruptions, and generalization to distribution shifts (Sec. 5.2) confirm that preserving order in the feature space *indeed* helps to learn the continuous nature of regression tasks.
To summarize, we’ve justified both **theoretically** and **empirically** that preserving order in feature representation helps to learn continuous targets.
> *In Fig. 3, why is a discernible pattern better?*
At a high level, Fig.3 illustrates the **correlation** between feature similarities and label similarities for both RnC and L1. Each entry (i,j) in the matrix refers to the feature similarity between samples i and j. Samples in the rows and columns of the matrix are ordered according to increased label values, i.e., the first row/column refers to the sample with the smallest label and the last row/column to the sample with the largest label. Thus, the samples that are most similar in the label space are the ones along the diagonal and the least similar in the label space are the two corners farthest away from the diagonal, with other points between the diagonal and these two corners gradually decreasing in their similarity in the label space. The **more** the feature similarity pattern (i.e., the color pattern in the figure) follows the label similarity pattern (described above), the **higher** the correlation between the similarity in the feature space and the label space. We will make them clearer in the revised paper.
> *What are the theorems trying to convey?*
The theoretical analysis is trying to prove that optimizing the proposed RnC loss will lead to an ordered feature embedding from a theoretical perspective. Thus, we first formalize the description of ordered feature embedding as $\delta$-ordered (Definition 1), and then demonstrate that when the proposed RnC loss is optimized to be sufficiently low, the feature embedding will be $\delta$-ordered (Theorem 3). We also prove that the proposed RnC loss is able to be optimized to be low enough (Theorem 2). Besides, as highlighted in the global response, we further prove that learning $\delta$-ordered feature embeddings can indeed lead to better regression performance based on Rademacher Complexity. Combining all above, we conclude that optimizing the RnC loss will lead to better regression performance.
> *Although ..., the datasets mostly have scalar responses (except for MPIFaceGaze) and seem repetitive to me. I think ... experiments with higher-dimensional responses.*
We want to clarify that the datasets we included in the experiments are **not** repetitive. To comprehensively evaluate the performance over regression tasks, we should not only take into consideration the output dimension, but also other important factors, such as application domain and input dimensions/modalities. The datasets in our evaluation are carefully selected considering all these factors together. They cover:
- Diversity in **application domains**: computer vision (AgeDB), human-computer interaction (MPIIFaceGaze), healthcare (TUAB) and weather monitoring (SkyFinder).
- Diversity in **input dimensions/modalities**: 2D images (AgeDB, MPIIFaceGaze, SkyFinder), and 1D time series (TUAB).
- Diversity in **output dimensions**: scalar values (AgeDB, TUAB, SkyFinder) and higher-dimensional vectors (MPIIFaceGaze).
Please note that the level of diversity in our datasets is considerable. Nonetheless, if the reviewer has specific suggestions on datasets with higher output dimensions, we are happy to include more results.
> *Where is the results of IMDB-WIKI?*
We apologize for the confusion. As we mentioned in Appendix B, we used IMDB-WIKI only for the **analysis**: testing our method’s resilience to reduced training data, performance on transfer learning, and the ability to generalize to unseen targets. We didn’t include it in the main results because we already incorporated AgeDB in the main results for the task of age estimation from face images; in addition, the age labels in AgeDB have been manually cleaned by other researchers while the age labels in IMDB-WIKI contain noise [1]. We will make this point clearer in the revised paper and properly refer to them in the main text.
[1] Moschoglou et al. AgeDB: the first manually collected, in-the-wild age database. CVPR Workshop 2017.
---
We hope the above clarifications have addressed all of your concerns, and made you more confident about the novelty, significance, and completeness of our paper. If you have more questions or suggestions, please do not hesitate to discuss with us. We thank you again for your time and feedback. We hope you could re-evaluate the paper based on our clarifications, and would really appreciate it if you could consider updating the review, and raising your score accordingly.
---
Rebuttal Comment 1.1:
Title: Raise score to 6
Comment: Thanks for the authors making clarifications and further explaining the details of the experiments. After carefully reading the authors' rebuttal and other reviewers' comments, I am generally satisfied with the responses to my concerns. I decide to raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you very much for your feedback
Comment: Thank you very much for your feedback. We are glad to learn that our response has addressed your concerns and that you decided to raise your score to 6. We noticed however that the score and review haven’t yet been updated. Thus, we would like to kindly request that you update your score and review in the system to reflect your decision to raise the score to 6.
Once again, thank you very much for your time and effort, and please do not hesitate to let us know if you have any further questions or comments about the paper. | Summary: The paper introduces a deep learning method, Rank-N-Contrast, for regression tasks. This method aims to capture continuity in data, something existing methods struggle with. The authors define a concept of $\delta$-ordered feature embedding and show theoretically that if Rank-N-Contrast loss is minimized, feature embeddings will be $\delta$-ordered. The proposed method excels in several regression tasks.
Strengths: This paper provides a novel perspective to regression tasks using deep learning models, a well-studied and widely-acknowledged problem setting. The paper's unique approach lies in its application of contrastive learning methods, which is a departure from conventional techniques that alter the loss function or incorporate specific regularization.
The paper sets itself apart by offering an extensive comparison to existing methodologies, using both theoretical and empirical results from regression and representation learning tasks. This comprehensive approach underlines the proposed method's potential and relevance.
The ubiquity of regression tasks in the realm of deep learning models implies that this paper's approach could contribute meaningfully to the field. Furthermore, the paper is well-composed, making it easy for readers to grasp the fundamental concept of the study.
Weaknesses: The paper positions "regression-aware representation" as vital, but a slight disparity between the theoretical and experimental results is apparent.
Theoretically, it suggests that a deep learning model can fulfill "regression-aware embedding" characteristics by minimizing the RNC loss appropriately. Experimentally, the paper exhibits consistent performance enhancements over existing approaches in both regression tasks and representation learning frameworks. Furthermore, it confirms that the implementation of RNC provides resilience against data corruption, improves performance with limited training data, and boosts transfer learning.
While the paper implies the potential to secure $\delta$-ordered feature embeddings by reducing RNC loss, and evidences steady performance improvement in real-world data contexts, the precise relationship between the continual characteristics of the "regression-aware representations" and the observed performance improvements is not fully clear yet. Additional discussion around this subject could offer more insights and, thereby, enrich our comprehension.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. **About Embedding Dimensions:** The learned representations in Fig. 1 are impressive, but the embedding dimension isn't specified. Were these embeddings trained directly in two dimensions, or were they first embedded in a high-dimensional space and then visualized with methods like UMAP? If UMAP was used, are the apparent clustering and lack of continuity in L1 and SupCon embeddings an artifact of the visualization process, or do these characteristics persist in the high-dimensional embeddings?
2. **About the Construction of RNC loss:** In l.104, just before Equation 1, it's stated that the normalized likelihood of $v_j$ "can be written as …" Is Equation 1 derived from some definition or axiom? At least the normalized likelihood is “defined” in the reference [41].
3. **About Trainability:** Theorem 3 claims that if RNC loss is sufficiently minimized, a desirable representation can be obtained. But how much can RNC loss be practically minimized? Can any insights be provided from a theoretical standpoint on the trainability of the proposed loss?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: One potential limitation is the difference in the number of training epochs between the one-stage and two-stage methods. According to Appendix l.629, the one-stage methods were trained for 400 epochs, whereas the two-stage methods, including the proposed approach, were trained for a total of 500 epochs. This longer training period for the proposed method could potentially be contributing to the improvements observed in table 2. A further analysis or an ablation study might help to confirm whether these enhancements are genuinely due to the method itself and not merely a consequence of the extended training period.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer dBkT,
Thank you very much for acknowledging the novelty and the contributions of our work. We sincerely appreciate the time and effort you have dedicated to evaluating our work. In the following, we address your concerns in detail.
> *The precise relationship between the continual characteristics of the "regression-aware representations" and the observed performance improvements is not fully clear yet. Additional discussion around this subject could offer more insights and, thereby, enrich our comprehension.*
Thanks for the valuable suggestion. Indeed, theoretical connections **do** exist between regression-aware representations and the observed enhancement in performance: In our **Global Response**, we provide a theoretical analysis based on Rademacher Complexity to show that ***$\delta$-ordered feature embedding leads to better generalization bound***. We will expand on the analysis therein and include it in the revised manuscript as a new theorem following Theorem 3.
> ***About Embedding Dimensions**: The learned representations in Fig. 1 are impressive, but the embedding dimension isn't specified. Were these embeddings trained directly in two dimensions, or were they first embedded in a high-dimensional space and then visualized with methods like UMAP? If UMAP was used, are the apparent clustering and lack of continuity in L1 and SupCon embeddings an artifact of the visualization process, or do these characteristics persist in the high-dimensional embeddings?*
Thanks for pointing this out, and we apologize for the confusion. The embeddings in Fig.1 are first embedded in a 512-dimensional embedding space, then visualized using UMAP. The clustering and lack of continuity are **unlikely** to be artifacts, because if the points are clustered / far apart in the UMAP visualization, it suggests that those points were close to / distant from each other in the high-dimensional space as well [29].
Furthermore, we calculated the Spearman’s rank correlation coefficient and the Kendall rank correlation coefficient between label similarities and feature similarities on that dataset in **Sec. 3 - Feature Ordinality**, where the feature similarities are computed from the **original** 512-dimensional feature vectors. The results in Table 1 confirm that the feature similarities learned by our method have significantly higher correlations with the label similarities than those by the L1 loss, which further verifies that the embeddings learned by RnC are indeed more continuous. We will make these points clearer in the revised paper.
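For concreteness, this feature-ordinality correlation can be computed as follows (a simplified sketch that ignores rank ties; function and variable names are illustrative, not from our released code):

```python
import numpy as np

def spearman(x, y):
    # Spearman's rho = Pearson correlation of rank-transformed values
    # (assumes no ties; stable sort keeps equal values in input order)
    rx = np.argsort(np.argsort(x, kind='stable')).astype(float)
    ry = np.argsort(np.argsort(y, kind='stable')).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

def feature_ordinality(features, labels):
    """Correlation between pairwise feature similarity (cosine) and
    pairwise label similarity (negative absolute label distance)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    feat_sim = feats @ feats.T
    label_sim = -np.abs(labels[:, None] - labels[None, :])
    iu = np.triu_indices(len(labels), k=1)   # unique pairs only
    return spearman(feat_sim[iu], label_sim[iu])
```

A perfectly ordered embedding (feature similarity monotone in label similarity) yields a coefficient of 1.0.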
> ***About the Construction of RNC loss**: In l.104, just before Equation 1, it's stated that the normalized likelihood of v_j "can be written as …" Is Equation 1 derived from some definition or axiom? At least the normalized likelihood is “defined” in the reference [41].*
Thank you for the great question. As stated in l.99, the **form** of the normalized likelihood is commonly used in the related metric learning literature [16, 43], where the likelihood is modeled to increase exponentially with respect to the feature similarity. Furthermore, we are inspired by [41] – where the denominator contains a **subset** of samples for ranking purposes – to introduce an adaptive set $\mathcal{S_{i, j}}$ that contains the samples of higher rank than $v_j$ given $v_i$, and define a **customized** likelihood $\mathbb{P}(v_j | v_i, \mathcal{S_{i, j}})$ accordingly that is suitable for our problem setting.
We apologize for the potential confusion and will make these points clearer in the revised paper.
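To make the construction concrete, here is a minimal, unoptimized reference sketch of a loss of this form, reading $\mathcal{S_{i,j}}$ as the samples whose label distance to the anchor $v_i$ is at least that of $v_j$ (this is an illustrative sketch, not our exact implementation):

```python
import numpy as np

def rnc_style_loss(features, labels, temperature=2.0):
    """O(n^2) sketch. features: (n, d); labels: (n,).
    For anchor i and positive j, the denominator sums over the set
    S_{i,j} of samples at least as far from i in label space as j is."""
    n = len(labels)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = np.exp(feats @ feats.T / temperature)   # exponentiated similarity
    total, count = 0.0, 0
    for i in range(n):
        dist = np.abs(labels - labels[i])
        for j in range(n):
            if i == j:
                continue
            in_s = dist >= dist[j]     # S_{i,j} membership
            in_s[i] = False            # exclude the anchor itself
            total += -np.log(sim[i, j] / sim[i][in_s].sum())
            count += 1
    return total / count
```

An embedding whose feature similarities decrease monotonically with label distance attains a lower value of this loss than one where the order is scrambled.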
> ***About Trainability**: Theorem 3 claims that if RNC loss is sufficiently minimized, a desirable representation can be obtained. But how much can RNC loss be practically minimized? Can any insights be provided from a theoretical standpoint on the trainability of the proposed loss?*
Thanks for the insightful comment. Actually, Theorem 2 proves that the RnC loss can be *arbitrarily* close to its lower bound, which means that the RnC loss can be sufficiently minimized for any $0 < \delta < 1$ from a **theoretical** perspective.
However, in practice, for any loss function, how much the loss can be minimized **empirically** depends a lot on the model, task, and data, such as whether the model capacity is large enough, whether the input contains sufficient information about the task, and whether the label is clean or not. Thus it usually cannot be *simply and/or universally* guaranteed. We will make a remark in the revised paper to discuss this point.
> *One potential limitation is the difference in the number of training epochs between the one-stage and two-stage methods .... A further analysis or an ablation study might help to confirm whether these enhancements are genuinely due to the method itself and not merely a consequence of the extended training period.*
Thanks for pointing out the difference between training epochs. In fact, we have already included that ablation study in **Appendix F.4 of the submission**. In this section, we adopted the two-stage training scheme for each of the one-stage methods, i.e., training the predictor for 100 more epochs on top of the encoder which was trained for 400 epochs. The results show that the two-stage training scheme does **not** help improve the performance of those one-stage methods, which further validates that the benefit of RnC stems from the proposed loss function rather than the training scheme / number of training epochs.
---
We thank you again for your time and feedback. We hope that our response has adequately answered your questions, and would lead to a favorable increase of the score. We are happy to discuss more if you have any further questions.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for giving us comprehensive responses to all the questions. Roughly all my questions were addressed by the authors' feedback. I would leave one point of slight concern.
Regarding the use of UMAP visualization, the authors stated as follows
> The clustering and lack of continuity are unlikely to be artifacts, because if the points are clustered / far apart in the UMAP visualization, it suggests that those points were close to / distant from each other in the high-dimensional space as well [29].
I would like to leave a different point of view on this comment. Indeed, dimensionality reduction methods such as UMAP are designed to preserve distance relationships in the original space as much as possible. In practical situations, however, UMAP visualization can often reveal superficially spurious cluster structures, even in artificial data that is inherently random and structureless. Since such a phenomenon depends on the number and dimension of the data and the hyperparameters of the UMAP, it would be good to have a simple ablation study that would eliminate such a possibility.
Overall, my assessment of this paper remains the same. This paper is well-written, and its value is clear. I believe it is well worthy of being accepted, while one minor point I mentioned above still remains. Therefore, I would like to keep my initial score.
---
Reply to Comment 1.1.1:
Title: Further response to the UMAP question
Comment: Thank you for your feedback and for your opinion that the paper is worthy of being accepted. As suggested by the reviewer, we will include an ablation study of the number / dimension of the data and the hyperparameters of the UMAP in the revised paper. Here, due to the limited time, we provide some preliminary results that show the structure is **not** due to artifacts:
- **Ablation of the number of data samples**: In Fig. 6 of the main paper, we show the UMAP visualization using a 10-webcam subset of SkyFinder dataset, whereas Fig. 1 uses the full dataset, which contains 44 webcams. The structure of the visualization in Fig. 6 is consistent with Fig. 1, indicating that the difference in their number of data samples did not eliminate the structure.
- **Ablation of the dimension of feature embeddings**: In the PDF attached in the Global Response, we generated the same plots for L1 and RnC as in Fig. 1, but using ViT-Small as the backbone, whose feature dimension is 384. The structure of the visualization in the PDF is consistent with Fig. 1, despite their difference in the number of dimensions (384 vs. 512).
- **Ablation of the hyperparameters of UMAP**: Following your suggestions, we explored the impact of major UMAP hyperparameters, such as the number of neighbors (`n_neighbors`) and the minimum distance between embedded points (`min_dist`). Specifically, we tried `n_neighbors` from {5, 10, 20, 50, 100} and `min_dist` from {0, 0.25, 0.5, 0.8, 0.99} (in Fig. 1 we use the default UMAP parameters, `n_neighbors = 15` and `min_dist = 0.1`). Unfortunately, we are not able to include the figure at this stage; however, we would like to note that the structures of the visualizations remain consistent with Fig. 1 across all of these hyperparameters: L1 and SupCon embeddings are fragmented while RnC embeddings are continuous.
Once again, we thank the reviewer for the constructive feedback and insightful suggestions. We hope the above results will address your concerns and help you be more confident about our paper. We will stress this point and provide a comprehensive ablation study in the revised paper. | Summary: The authors present a new loss for representation learning in regression, RNC (Rank-n-contrast). RNC can be seen as the SupCon loss adapted to the regression setting, where the labels given are not hard class labels but rather continuous regression labels. In SupCon the negatives for each example are members of other classes and the positives come from members of the same class. In RNC positive pairs are formed with every example to the anchor, and the negatives are those examples with larger label distances to the anchor than the positives. By doing this they are able to train a representation for the data that is continuous in nature, which is hypothesized to represent the data better.
Strengths: 1. I believe the method is sufficiently original as a nontrivial adaptation of the SupCon methodology to the regression setting. The designation of the positives and negatives for the RNC contrastive loss is well motivated, and I can see why the loss would give the kind of continuous representations that it does.
2. The method seems to be pretty simple to implement. I particularly like that the RNC loss is the only loss the authors pretrained with and it wasn't some highly tuned composition of many different losses. It gives me more confidence that the RNC loss is providing the gain in performance.
3. The theory is simple and well-motivated in that it proves the loss is doing what the authors are claiming it does. The theory combined with the visualization of the representations give me confidence that the representations are in fact ordered continuously.
4. The experimental evaluation is fairly thorough. The standard questions are answered, i.e. standard deviation error bars, how does it perform against other pretraining tasks, how does a plain two-stage training compare, whether augmentation is important, etc.
Weaknesses: 1. I think the experimental evaluation can be broadened to demonstrate the generality of the method. Currently, the datasets that are evaluated are computer vision regression tasks, which confines the evaluation to just ResNet-based models when we compare against the different losses and training methods in the paper. In particular, one area where regression tasks are popular is the tabular setting. I would be very interested to see a comparison between RNC and tree-based methods on popular tabular datasets. For examples of models and datasets to compare on, the authors can check https://arxiv.org/abs/2106.11959 (not my paper btw)
2. It is great to see that the method is able to consistently perform well when compared against other baselines. But as someone who is not familiar with these particular regression tasks I am not sure whether the magnitude of improvement is substantial. Could the authors help me understand the scale of improvement?
3. Returning to my point on how the evaluation is restricted to ResNet-based architectures: I wonder whether, given that ResNet was designed to classify images, it has some inductive bias toward clustering sets of images based on semantic similarities, which is what causes the disjointed representations in Figure 1 (left). Expanding past the computer vision domain would be very valuable in making the contribution more impactful and general.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions and possible directions of improvement are listed under "Weaknesses".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I believe this was sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer wKSy,
Thanks for the constructive comments and insightful feedback. We are glad that you found the method novel and simple to implement, the theory simple and well-motivated and the evaluation thorough. Here we address your concerns one by one.
> *Currently the datasets that are evaluated are on computer vision … I would be very interested to see a comparison between RNC and tree-based methods on popular tabular methods.*
First, we would like to clarify that **not** all datasets included in our evaluation are image datasets - TUAB is a dataset consisting of EEG signals (i.e., time-series data). Besides, our method is **not** restricted to ResNet, but rather applicable to various deep architectures, including Transformer-based architectures. We chose ResNet as our main backbone because ResNet is the most commonly used architecture for mainstream regression methods and tasks [1, 2, 3, 4]. Here we further ran experiments with **ViT-Small** as the backbone on two datasets. The results (metric: MAE) are shown in the table below.
||AgeDB|SkyFinder|
|-|:-:|:-:|
|L1|10.18|3.63|
|**RnC(L1)**|**9.58**|**3.50**|
Note that the performance with ViT could be worse than that with ResNet since ViT typically requires larger amounts of training data and usually performs worse on smaller datasets [5]. Nevertheless, RnC(L1) still performs *significantly better* than L1, showing the generality of RnC.
Besides, following the reviewer’s suggestion, we further conducted experiments on a **tabular** dataset and compared RnC with tree-based methods and competitive deep models. Specifically, we followed the same evaluation protocols in [6] and evaluated on a subset (10% random subsampled due to time and computation limit) of Microsoft (MI, search queries) dataset [7]. The results are shown in the table below.
|Method|RMSE|
|-|:-:|
|CatBoost|0.758 $\pm$ 4.8e-4|
|XGBoost|0.756 $\pm$ 6.7e-4|
|ResNet|0.762 $\pm$ 5.3e-4|
|**RnC(ResNet)**|0.756 $\pm$ 6.4e-4|
|FT-Transformer|0.760 $\pm$ 8.9e-4|
|**RnC(FT-Transformer)**|**0.753 $\pm$ 7.7e-4**|
The results show that applying RnC to the deep models *significantly improved* their performance and allowed them to *match or outperform* the popular tree-based methods on the tabular dataset. We will add the above results and discussions to the revised paper.
> *As someone who is not familiar with these particular regression tasks I am not sure whether the magnitude of improvement is substantial. Could the authors help me understand the scale of improvement?*
Thanks for the great question. Here we discuss the improvement scale for each dataset:
- **AgeDB**: The incorporation of RnC reduces the prediction error by 5.8% on average for all regression methods in Table 2, and Table 3 shows that our performance gain against the best SOTA method is 0.31 years, while the performance gap between the best and the second-best SOTA method is 0.02 years.
- **TUAB**: The incorporation of RnC reduces the prediction error by 9.3% on average for all regression methods in Table 2, and Table 3 shows that our performance gain against the best SOTA method is 0.31 years, while the performance gap between the best and the second-best SOTA method is 0.05 years.
- **MPIIFaceGaze**: The incorporation of RnC reduces the prediction error by 11.7% on average for all regression methods in Table 2, and Table 3 shows that our performance gain against the best SOTA method is 0.18 degrees, while the performance gap between the best and the second-best SOTA method is 0.05 degrees.
- **SkyFinder**: The incorporation of RnC reduces the prediction error by 7.0% on average for all regression methods in Table 2, and Table 3 shows that our performance gain against the best SOTA method is 0.06 degrees Celsius, while the performance gap between the best and the second-best SOTA method is 0.01 degree Celsius.
We believe that the performance gain from the proposed RnC framework is *significant* and *substantial*, and we will expand upon these explanations in the revision.
> *I wonder if given that ResNet was designed to classify images that it has some inductive bias on clustering sets of images based on semantic similarities, which is what causes the disjointed representations in Figure 1 (left).*
First, we would like to clarify that the three plots in Fig.1 are generated with the **same** architecture and only differ by the **losses** used to train them, which indicates that it is the different losses that lead to different structures (continuous or fragmented) in the representations. Second, regarding the reviewer’s concern on whether it is the inductive bias in ResNet that leads to the fragmented representations in Fig.1, we generated the same plots for L1 and RnC using **ViT-Small** as the backbone (**Please see the PDF attached in the Global Response**), which reveals similar structures and verifies that the presence of fragmented representations in existing general regression learning schemes stems from their inability to capture the underlying continuous order between samples.
[1] Yang et al. Delving into deep imbalanced regression. ICML 2021.
[2] Gong et al. Ranksim: Ranking similarity regularization for deep imbalanced regression. ICML 2022.
[3] Zhang et al. Improving deep regression with ordinal entropy. ICLR 2023.
[4] Engemann et al. A reusable benchmark for brain-age prediction from M/EEG resting-state signals. NeuroImage 2022.
[5] Zhu et al. Understanding Why ViT Trains Badly on Small Datasets: An Intuitive Perspective. Arxiv 2023.
[6] Gorishniy et al. Revisiting Deep Learning Models for Tabular Data. NeurIPS 2021.
[7] Qin et al. Introducing LETOR 4.0 datasets. Arxiv 2013.
---
We hope our response has thoroughly addressed your concerns, and would really appreciate it if you could consider raising your score accordingly. If you have any further questions or suggestions, please do not hesitate to share them. We are eager to engage in further discussions with you.
---
Rebuttal Comment 1.1:
Title: The rebuttal has addressed my concerns.
Comment: I have raised my score accordingly. | Summary: The authors discuss the benefits of contrastive learning for learning structured representations in a regression setting. While contrastive losses are typically formulated in terms of “similar” and “dissimilar” examples, the authors make use of the extra information conveyed by the continuous target label. They show how adding the RnC contrastive term to training is beneficial for a suite of high-dimensional regression tasks.
Strengths: This is a technically strong paper that, in my opinion, makes a contribution towards the important and understudied problem of representation learning for regression. The qualitative results [Fig 1] are compelling. The proposed method is intuitive and the paper is well written. The theoretical analysis and extensive empirical results suggest to me that the proposed method will be of interest to NeurIPS attendees. I was especially interested in the suggestion that contrastive training yields more robust representations [lines 259--279], although I think further work will be needed in the future to validate this finding in more sophisticated OOD generalization settings.
Weaknesses: To my knowledge the technical contributions here are sound. However, after reading the paper, I found myself a bit concerned about how neural net-based regression models are going to be used in the near future. And I think the paper would benefit from a head-on discussion of the potential ethical issues at play.
Regression is notoriously difficult for neural networks (for example, many baseline methods convert the regression into a classification problem). A technological advance here could open up entirely new application areas. However, the use of datasets involving human faces suggests to me that some of these new applications could bring up new ethical concerns as well. While predicting age and gaze direction from face images (as done in this paper) might seem reasonable, there are existing critiques in the literature [e.g. https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1804&context=iplj, https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a, https://dl.acm.org/doi/pdf/10.1145/3375627.3375820] arguing that predicting other target variables, such as a continuous proxy for emotion, is ethically fraught. In the authors’ opinion, are there application areas where RnC should *not* be applied?
I think the paper would benefit from a discussion of these issues. While the lack of such discussion (the broader impact section is, frankly, rather boilerplate) represents a weakness in my view, it probably should not stand in the way of the paper’s acceptance, since it is more of a critique of the whole subfield rather than this specific paper. However, I am going to request an ethics review in order to get a second opinion about this.
The related works are generally well covered. One exception is the recent C-mixup paper [https://arxiv.org/abs/2210.05775], which also uses label similarity in regression to group “similar” data points. However their approach uses these similarities for data augmentation rather than contrastive learning. A discussion of how the two approaches differ would benefit the reader.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * How is the temperature parameter \tau tuned? [line 106]
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations and broader impacts are discussed [Sec. 5.4, App. H]. However, given that images of faces are used frequently in the experiments, I think that a more complete discussion about broader impacts, especially as it relates to task definition for regression, should be included (see “Weaknesses” above and my request for an ethics review below).
Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer D7S9,
Thank you very much for your valuable feedback. We are delighted to see that you found the method intuitive, the results compelling and the paper well-written, and we wish to express our gratitude for bringing the ethics considerations to our attention. Here we address your concerns one by one.
> *Given that images of faces are used frequently in the experiments, I think that a more complete discussion about broader impacts, especially as it relates to task definition for regression, should be included.*
Thank you for the dedicated discussion about ethical considerations. We certainly agree that these are essential points that deserve being addressed in the main body of the paper. We have **revised our Broader Impact and Limitations sections**, as shown below, and will move them to the **main body** of the revised manuscript. We hope that the revised sections address your concerns.
**Broader Impacts.**
*We introduce a novel framework designed to enhance the performance of generic deep regression learning. We believe this will significantly benefit regression tasks across various real-world applications. Nonetheless, several potential risks warrant discussion. First, when the framework is employed to regress sensitive personal attributes such as intellectual capabilities, health status, or financial standing from human data (like facial images or physiological signals), there's a danger it might reinforce or even introduce new forms of bias. Utilizing the method in these contexts could inadvertently justify discrimination or the negative targeting of specific groups. Second, in contrast to handcrafted features which are grounded in physical interpretations, the feature representations our framework learns can be opaque. This makes it difficult to understand and rationalize the model’s predictions, particularly when trying to determine if any biases exist. Third, when our method is trained on datasets that do not have a balanced representation of minority groups, there's no assurance of its performance on these groups being reliable. It is essential to recognize that these ethical concerns apply to deep regression (and classification) models at large, not solely our method. However, the continuous nature of our representation which facilitates interpolation and extrapolation might inadvertently make it more tempting to justify such unethical applications. Anyone seeking to implement or use our proposed method should be mindful of these concerns. Both our specific method and deep regression models, in general, should be used cautiously to avoid situations where their deployment might contribute to unethical outcomes or interpretations.*
**Limitations.**
*Our proposed method presents some limitations. Firstly, the technique cannot discern spurious or incidental correlations between the input and the target within the dataset. As outlined in the Broader Impact section, this could result in incorrect conclusions potentially promoting discrimination or unjust treatment when utilized to deduce personal attributes. Future research should delve deeper into the ethical dimensions of this issue and explore strategies to ensure ethical regression learning. A second limitation is that our evaluation primarily focuses on general regression accuracy metrics (e.g., MAE) without considering potential disparities when evaluating specific subgroups (e.g., minority groups). Given that a regression model's performance can vary across demographic segments, subgroup analysis is an avenue that warrants exploration in subsequent studies. Lastly, our approach learns continuous representations by contrasting samples against one another based on their ranking in the target space, necessitating label information. To adapt it for representation learning with unlabeled data, our framework will need some modifications, which we reserve for future work.*
We certainly welcome any further suggestions from the reviewer, and are more than happy to incorporate them to make the statements more comprehensive.
> *The recent C-mixup paper also uses label similarity in regression to group “similar” data points. A discussion of how the two approaches differ would benefit the reader.*
Thanks for pointing out the missing reference. We sincerely apologize for the oversight during the submission phase. We will cite and discuss the C-mixup paper in our revised manuscript. Here is a draft of the discussion to be added to the **Related Work** section:
*C-mixup [1] leverages label similarity for regression tasks. Specifically, it adapts the original mixup [2] data augmentation technique for regression learning by adjusting the sampling probability of the mixed pairs according to the label similarities. In contrast, our method contrasts samples against each other based on the rankings of label similarities. It is also worth noting that our method is orthogonal and complementary to data augmentation algorithms for regression, such as C-mixup.*
> *How is the temperature parameter \tau tuned? [line 106]*
As discussed in Appendix E, we performed standard hyper-parameter search for the temperature parameter $\tau$ in {0.1, 0.2, 0.5, 1.0, 2.0, 5.0} and selected one with the best performance, which is 2.0.
[1] Yao et al. C-mixup: Improving generalization in regression. NeurIPS 2022.
[2] Zhang et al. Mixup: Beyond Empirical Risk Minimization. ICLR 2018.
---
We hope our response has addressed all of your concerns and can lead to a favorable increase of the score. Please feel free to let us know if you have other questions or suggestions. We are more than willing to discuss more with you.
---
Rebuttal Comment 1.1:
Title: author rebuttal
Comment: I read the rebuttal and the other reviews. For now I will keep my score the same, which reflects my belief that the paper would be a very nice addition to the conference. If an ethics review is added later I will read and consider its contents. | Rebuttal 1:
Rebuttal: We are grateful to all the reviewers for the time and effort they invested in reviewing our paper. It is heartening to note that the reviewers found:
- The paper addresses an **important** (D7S9), **ubiquitous** (dBkT), and **interesting** (6JYz) problem.
- The proposed method is **novel** (wKSy, dBkT), **well-motivated** (wKSy), **intuitive** (D7S9), and **easy to implement** (6JYz, wKSy).
- The proposed method is **justified theoretically** (6JYz, D7S9, wKSy, dBkT, 2Kp1), and the theories are **well-motivated** (wKSy).
- The empirical evaluations are **comprehensive** (6JYz, D7S9, wKSy, dBkT, 2Kp1), and the results are **compelling** (6JYz, D7S9).
- The paper is **well-written** and **easy-to-follow** (D7S9, dBkT).
We made a concerted effort to provide a comprehensive response to each reviewer, with point-to-point answers following each review. We hope that our response adequately addresses the reviewers’ concerns and would be happy to answer any additional questions you may have.
---
In this **Global Response** section, we would like to answer a common question raised by Reviewers 6JYz, dBkT and 2Kp1:
> *How does the delta-ordered feature embeddings relate to final performance gain for the regression tasks from a theoretical perspective?*
We thank the reviewers for highlighting this insightful question, which helps to further enhance the completeness of our paper. Learning an ordered feature embedding can *indeed* boost the performance of the regression task.
Below, we present an analysis based on Rademacher Complexity [1] to substantiate that ***$\delta$-ordered feature embedding results in a better generalization bound***:
Specifically, regression learning can be formulated as finding a hypothesis $h$ to predict the target $y$ from the input $x$. Suppose there are $m$ data points in the training set $\mathcal{S}=\\{(x_k, y_k)\\}^m_{k=1}$. Let $\mathcal{H}_1$ be the class of all possible functions from the input space to the target space.
If a $\delta$-ordered feature embedding is guaranteed with an encoder $f$ mapping $x_k$ to $v_k$, the set of candidate hypotheses can be reduced to all "$\delta$-monotonic" functions $h(x) = g(f(x))$, which satisfy $\forall i, j$ and $k$, $d(g(v_i), g(v_j)) < d(g(v_i), g(v_k))$ for $s_{i,j} > s_{i,k} + \frac{1}{\delta}$, $d(g(v_i), g(v_j)) = d(g(v_i), g(v_k))$ for $|s_{i, j} - s_{i, k}| < {\delta}$, and $d(g(v_i), g(v_j)) > d(g(v_i), g(v_k))$ for $s_{i,j} < s_{i,k} - \frac{1}{\delta}$, where $d(\cdot,\cdot)$ is the target distance measure and $s_{i, j}$ is the feature similarity between $v_i$ and $v_j$. We denote the class of all "$\delta$-monotonic" functions by $\mathcal{H}_2$. Note that the optimal hypothesis, i.e., $\forall x, y, h^*(x) = y$, is in both $\mathcal{H}_1$ and $\mathcal{H}_2$.
Further, for a hypothesis set $\mathcal{H}_i$, let $\mathcal{A}_i = \\{(l((x_1, y_1); h), ..., l((x_m, y_m); h)): h\in\mathcal{H}_i\\}$ be the loss set for each hypothesis in $\mathcal{H}_i$ with respect to the training set $\mathcal{S}$, where $l$ is the loss function. Let $c_i$ be the upper bound of $|l((x, y); h))|$ for all $x, y$ and $h \in \mathcal{H}_i$.
We introduce the Rademacher Complexity [1] of $\mathcal{A}_i$, denoted as $R(\mathcal{A}_i)$.
Then, from the generalization bound based on Rademacher Complexity [1], we have, with a high probability (at least $1-\epsilon$), the gap between the empirical risk (i.e., training error) and the expected risk (i.e., test error) is upper bounded by $2R(\mathcal{A}_i) + 4c_i\sqrt{\frac{2\ln(4/\epsilon)}{m}}$.
Since $\mathcal{H}_2 \subset \mathcal{H}_1$, we have $\mathcal{A}_2 \subset \mathcal{A}_1$ and $c_2 \leq c_1$, and from the monotonicity of Rademacher Complexity we have $R(\mathcal{A}_2) \leq R(\mathcal{A}_1)$. Hence, with a $\delta$-ordered feature embedding, the upper bound on the gap between the training error and the test error will be reduced, which leads to better regression performance.
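For readability, the comparison above can be condensed into a single chain (restating the text, with $\epsilon$ and $m$ as defined there; this is a summary of the argument, not an additional result):

```latex
\mathcal{H}_2 \subset \mathcal{H}_1
\;\Longrightarrow\;
\mathcal{A}_2 \subset \mathcal{A}_1,\quad c_2 \le c_1
\;\Longrightarrow\;
R(\mathcal{A}_2) \le R(\mathcal{A}_1)
\;\Longrightarrow\;
2R(\mathcal{A}_2) + 4c_2\sqrt{\tfrac{2\ln(4/\epsilon)}{m}}
\;\le\;
2R(\mathcal{A}_1) + 4c_1\sqrt{\tfrac{2\ln(4/\epsilon)}{m}}.
```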
---
To put it more intuitively, fitting an ordered feature embedding **reduces** the **complexity** of the regressor, which enables **better generalization ability** from training to testing, and ultimately leads to the **final performance gain**.
Relatedly, we note that the enhanced generalization ability is further **empirically verified** in our paper (see **Sec. 5.2** in our main paper). Specifically, if not constrained, the learned feature embeddings could capture spurious or easy-to-learn features that are not generalizable to the real continuous targets (see **Robustness to Spurious Targets**). Such property also leads to better robustness to data corruptions, better resilience to reduced training data, and better generalization to unseen targets.
To summarize, learning an ordered feature embedding can indeed lead to better performance for regression tasks. We will describe these results formally as a new theorem following Theorem 3 in the revised paper. We believe the new results will make our paper more significant and comprehensive.
[1] Shalev-Shwartz & Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press 2014.
---
We hope our response has adequately addressed the reviewers’ question(s), and would really appreciate it if the reviewers could consider raising their scores after reading our response. We are happy to take any further questions from the reviewers.
Pdf: /pdf/29629bec97f9c5c184c367d0e35113d46b5d1b16.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: In this paper, the authors propose a novel framework that learns continuous representations for regression problems by contrasting samples against each other based on the rankings induced by the target values. The proposed method is evaluated on several regression tasks, and the results show that it achieves competitive performance compared to state-of-the-art methods.
Strengths: 1. This paper tackles a very interesting problem of representation fragmentation in deep regression models. The proposed method is simple and very effective at learning continuous representations that fit the regression task. It is very interesting to see that the traditional methods learn representations clustered based on spurious targets (i.e., camera location in the SkyFinder dataset) while the proposed method learns a nice continuous representation that captures the target values.
2. The authors present a theoretical analysis of the proposed loss function and show that it can learn delta-ordered feature embeddings when sufficiently trained.
3. The proposed loss function is conceptually simple and easy to implement. The proposed method is thoroughly evaluated on several regression tasks and the results show that the proposed method can achieve competitive performance compared to the state-of-the-art methods. The authors further conducted ablation studies to show the effectiveness of the proposed method under data corruption, unseen or spurious targets.
Weaknesses: 1. While it is nice to see that the proposed method leads to delta-ordered feature embeddings, it would be nice if the authors could further theoretically connect the delta-ordered feature embeddings to the final performance of the regression task. This would help to justify that the delta-ordered feature embeddings are indeed useful for the regression task.
2. Regarding the remark on Thm 3: to achieve delta-ordered features for the entire dataset, do we need to optimize all batches to achieve a low enough loss? Is there any guarantee/insight that there is a feasible solution that can achieve a low enough loss for all batches?
3. It seems the IMDB-WIKI performance is missing in Table 2.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See the comments in "weaknesses".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 6JYz,
Thanks for your constructive comments and insightful questions. We are delighted to see that you appreciate the contributions of our work. Below, we address your concerns in detail.
> *It would be nice if the authors can further theoretically connect the delta-ordered feature embeddings to the final performance of the regression task.*
Thank you for the valuable suggestion. The delta-ordered feature embeddings can *indeed* be theoretically connected to the final performance: In our **Global Response**, we provide an analysis based on Rademacher Complexity to show that ***$\delta$-ordered feature embedding leads to better generalization bound***. We will expand on the theoretical analysis therein and include it as a new theorem following Theorem 3 in the revised paper.
> *For the remark for Thm 3, to achieve delta-ordered features for the entire dataset, is it that we need to optimize all batches to achieve low enough loss? Is there any guarantee/insight that there is a feasible solution that can achieve low enough loss for all batches?*
Thanks for the great question. We do not need to conduct optimizations for all batches, which is also practically impossible. In fact, one should consider the training process as a cohesive whole, which is effectively optimizing the **expectation** of the loss over all possible random batches. In addition, Markov's inequality [1] guarantees that when the expectation of the loss is optimized to be sufficiently low, the loss on any batch will be low enough with a high probability. We will add this point to the revised paper.
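As a hedged one-line formalization of this point: for the nonnegative batch loss $\ell_B$ over a random batch $B$, Markov's inequality states

```latex
\Pr\big[\ell_B \ge t\big] \;\le\; \frac{\mathbb{E}_B[\ell_B]}{t}, \qquad t > 0,
```

so driving the expected loss $\mathbb{E}_B[\ell_B]$ below $\delta' t$ ensures that a random batch has loss below $t$ with probability at least $1-\delta'$. Here $t$ and $\delta'$ are illustrative thresholds, not quantities from the paper.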
> *It seems the IMDB-WIKI performance is missing in Table 2.*
We apologize for the confusion. As we mentioned in Appendix B, we used IMDB-WIKI only for the **analysis**: testing our method’s resilience to reduced training data, performance on transfer learning, and the ability to generalize to unseen targets. We didn’t include it in the main results because we already incorporated AgeDB in the main results for the task of age estimation from face images; in addition, the age labels in AgeDB have been manually cleaned by other researchers while the age labels in IMDB-WIKI contain noise [2]. We will make this point clearer in the revised paper and properly refer to them in the main text.
[1] Grimmett & Stirzaker. Probability and random processes. Oxford University Press 2020.
[2] Moschoglou et al. AgeDB: the first manually collected, in-the-wild age database. CVPR Workshop 2017.
---
We hope that our response has addressed all of your concerns and offered any needed clarifications, and that you may consider a favorable increase of the score. Please do not hesitate to discuss with us if you have other comments. We are always happy to take any questions or suggestions.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the clarifications. I have read the response and other reviews. I keep my current score unchanged. | null | null | null | null | null | null |
Top-Ambiguity Samples Matter: Understanding Why Deep Ensemble Works in Selective Classification | Accept (poster) | Summary: This paper focuses on why the ensemble method works.
Authors prove that the ensemble has a lower selective risk than the member model for any coverage within a range, based on some assumptions.
Authors further conduct experiments on both computer vision and natural language processing tasks to verify proofs and assumptions.
Strengths: 1. Selective classification in Related Work is summarized well.
Weaknesses: 1. Based on which model is the ambiguity in Section 4 measured, given that there exist both an ensemble and a member model? What is the meaning of "ensembling on high-ambiguity samples"? Do you fine-tune models on these "high-ambiguity samples"? If you fine-tune models on "high-ambiguity samples", why is the ambiguity measured by another ensemble?
Please clarify the above question.
2. Figure 2 is obtained from which experiments or just a sketch? If figure 2 just a sketch, it cannot be used to verify assumption 1.
Authors should conduct massive experiments/proofs to verify this assumption, instead of using a sketch.
Moreover, what is the definition of "correlated predictive probability distributions"? Please clarify it in mathematical form.
What is the mathematical definition of "definite samples"? How can a sample be considered definite?
What is the difference between "definite samples" and "low-ambiguity samples"?
The authors use too many different terms without clear definitions.
3. The authors should verify Assumption 1 on more datasets and more DNNs; a single experiment is not convincing enough.
Moreover, to verify Assumptions 2 and 3, the authors should use more architectures as the backbone and construct more ensembles. One kind of backbone and one ensemble cannot sufficiently verify Assumptions 2 and 3.
4. Authors should conduct experiments to explore whether the number of member models affects the performance of the ensemble.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: 1. Some terms are not defined, and authors do not clarify the difference between different terms, which makes this paper hard to follow.
2. Experimental results are not convincing enough, since the authors use just one backbone and one ensemble for each task.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. This paper is the first to provide a theoretical foundation for Deep Ensemble in selective classification, and all three other reviewers agree that our analysis is sound and insightful. Some points in the paper may not have been well explained, so we clarify them below. We also encourage you to read the global response, which addresses several common questions. In the following, we answer your concerns point by point.
> The ambiguity in Section 4 is measured based on which model, since there exist an ensemble and a member model.
The ambiguity is measured based on the divergence of the predictions among the member models. It is defined in Line 109 of the paper.
> What is the meaning of "ensembling on high-ambiguity samples?" Do you fine-tune models on these "high-ambiguity samples?"
As defined in Section 4, *ensembling on high-ambiguity samples* means combining the member models (the ensemble) on high-ambiguity samples and using a single member model on low-ambiguity samples. By construction, this operation involves no fine-tuning. Mathematically, it introduces a model $\tilde{f}_E$ that makes predictions as
- $\tilde{f}_E(x) = f_E(x)$ (echoing the prediction of the ensemble) if $ambiguity(x)\ge threshold$;
- $\tilde{f}_E(x) = f_m(x)$ (echoing the prediction of a member model), otherwise,
where $threshold$ is the median of ambiguities on the dataset. We equate this model with ensembling on high-ambiguity samples in Section 4.
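The piecewise rule above can be sketched in a few lines of NumPy (an illustrative stand-in for our setup; the function name and the choice of member 0 as $f_m$ are our own assumptions for the sketch):

```python
import numpy as np

def ensemble_on_high_ambiguity(member_probs, ambiguity, threshold):
    """Sketch of the piecewise model f~_E described above.

    member_probs: (M, N, C) predictive distributions of M member
        models on N samples over C classes.
    ambiguity:    (N,) per-sample ambiguity scores.
    threshold:    scalar, e.g. the median ambiguity on the dataset.
    Returns an (N, C) array: the ensemble average f_E on samples with
    ambiguity >= threshold, one member's prediction f_m otherwise.
    """
    ensemble = member_probs.mean(axis=0)  # f_E(x): average of members
    member = member_probs[0]              # f_m(x): a fixed member (our choice)
    high = ambiguity >= threshold
    return np.where(high[:, None], ensemble, member)
```

No model is retrained here; the switch between $f_E$ and $f_m$ happens purely at prediction time, which is why the operation is irrelevant to fine-tuning.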
> Is Figure 2 obtained from experiments, or is it just a sketch? If Figure 2 is just a sketch, it cannot be used to verify Assumption 1. The authors should conduct extensive experiments or provide proofs to verify this assumption, instead of relying on a sketch.
Figure 2 is a sketch, as its caption indicates ("An illustration of the intuition of our analysis."). This figure is not used to verify Assumption 1, but to illustrate the analysis framework. Assumption 1 is supported by the extensive experiments in Section 4.
> Moreover, what is the definition of "correlated predictive probability distributions?" Please clarify it by using mathematic forms.
Here, we use "correlated" to emphasize that we abandon the *uncorrelated-estimation-error assumption* used in previous work (on analysis of the ensemble method on ordinary classification tasks, mentioned in Related Work). This assumption states that the errors of member models are uncorrelated. In contrast, we do not specify any form of statistical dependency (including correlation) among member models on ambiguous samples. Therefore, we did not provide a mathematical formula for the definition of "correlated predictive probability distribution".
Since this wording can be misleading, we will replace it with "the statistical dependency among member models is unknown on ambiguous samples".
> What is the mathematical definition of "definite samples"? How a sample can be considered as definite? What is the difference between "definite samples" and "low-ambiguity samples". Authors use too many different terms without clear definition.
We use *low/high-ambiguity samples* in experiment analysis and use *definite/ambiguous samples* in theoretical analysis.
In the experimental analysis, we measure the ambiguity of a sample as the divergence of the predictions among member models. Given a threshold $\epsilon$, samples with ambiguity less than $\epsilon$ are *low-ambiguity samples*, and the others are *high-ambiguity samples*.
Then, to facilitate the theoretical analysis, we abstract *low-ambiguity samples* as *definite samples* where all member models yield the same predictive probability distribution.
Could you provide some examples of terms that lack definitions, to help us improve our paper? The terms questioned in this review are in fact defined in the paper:
- Ambiguity: in line 109,
- low/high-ambiguity sample: in lines 113-114,
- ambiguous/definite sample: in lines 145-146, lines 187-189, and in Assumption 1.
> The authors should verify Assumption 1 on more datasets and more DNNs; a single experiment is not convincing enough. Moreover, to verify Assumptions 2 and 3, the authors should use more architectures as the backbone and construct more ensembles. One kind of backbone and one ensemble cannot sufficiently verify Assumptions 2 and 3.
Thank you for your advice. We recently extended our experiments on the following three dimensions:
1. the model architectures, extended to ResNet, AlexNet, and DenseNet;
2. the datasets, extended to ImageNet100;
3. the number of member models, extended to 20.
The results are presented in Figure 0.2 of the global response. As the figure shows, Assumptions 1, 2, and 3 are verified across various experimental settings, indicating that our assumptions might reflect general characteristics of DNNs. This could be explained from a theoretical view: for example, Assumptions 2 and 3 might be attributed to the low bias of DNNs (from a bias-variance perspective) due to their large model capacity.
> Authors should conduct experiments to explore whether the number of member models affects the performance of the ensemble.
This result is reported in Figure D.1 in the Appendix of the submitted version. The result shows that as the number of member models increases, the AURC of the ensemble decreases; however, the decrease slows down as more members are added.
If you also care about whether our assumptions are stable across various numbers of member models, you can see the leftmost column of Figure 0.2 in the global response. This figure justifies the assumptions on an ensemble of 20 VGG16 models on CIFAR10.
---
Rebuttal Comment 1.1:
Comment: I'm thankful to the authors for their rebuttal response and clarifications.
Some of my concerns are addressed.
---
Reply to Comment 1.1.1:
Title: Reply to reviewer's comment
Comment: We are glad to hear that some of your concerns are addressed. Could you tell us which concerns remain unresolved? | Summary: This paper aims to investigate why ensemble models perform better than their member models. The authors conduct an empirical study to support their assumptions. They separate the data into two categories: high-ambiguity samples and low-ambiguity samples. Their findings reveal that while ensembling on high-ambiguity samples improves the selective risk, the same process on low-ambiguity samples yields results similar to those of the member model. Following these observations, they propose several assumptions about the models and samples, demonstrating that under these conditions ensemble models can achieve a more favorable selective risk. To verify their assumptions and conclusions, the authors conduct additional experiments.
Strengths: 1. This paper offers a theoretical understanding of ensemble models, potentially enlightening future algorithm design.
2. Compared to other theoretical frameworks, the assumptions made in this paper are more realistic.
3. The authors extensively employ empirical experiments to elucidate their motivations and assumptions, enhancing the paper's readability.
Weaknesses: 1. Despite the assumption seeming reasonable, the reviewer does not agree that the proof presented in this paper demonstrates the ensemble model's behavior. The proof heavily relies on the $\lim_{t\to 1^-}$ and the main result necessitates a minimal $\phi_0$ value. Nonetheless, as depicted in Figure 5, even with a significantly large $\phi_0$, the ensemble model outperforms the member model. Consequently, the theory proposed in this paper does not align with the results of their experiments.
2. For a theoretical study, the methodologies employed in this paper lack novelty or intrigue.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: What is the intuition behind defining low-ambiguity and high-ambiguity samples as in Assumption 1? Given this assumption, it appears that a majority of samples would be classified as high-ambiguity.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The theory provided in this paper cannot demonstrate the ensemble model's behavior, which limits the impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Some points in the paper may not have been well explained, so we clarify them below. We also encourage you to read the global response, which addresses several common questions. In the following, we answer your concerns point by point.
> Despite the assumption seeming reasonable, the reviewer does not agree that the proof presented in this paper demonstrates the ensemble model's behavior. The proof heavily relies on $\lim_{t\to 1^-}$ and the main result necessitates a minimal $\phi_0$ value. Nonetheless, as depicted in Figure 5, even with a significantly large $\phi_0$, the ensemble model outperforms the member model. Consequently, the theory proposed in this paper does not align with the results of their experiments.
We have to point out that the theory **does** align with the results. Theorem 1 states that the ensemble should have a lower selective risk (than the member model) when the target coverage is in $(0, \phi_0) \subset (0, 1]$; and the experiment shows that the ensemble has a lower selective risk when the target coverage is in $(0, 1]$. Our analysis does not state that $\phi_0$ has to be small.
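For reference, the quantity compared in Theorem 1 can be sketched under the standard definitions (a minimal illustration, not the paper's code; the function name is ours): accept the most-confident fraction of samples equal to the target coverage and measure the 0-1 error among them.

```python
import numpy as np

def selective_risk(confidence, correct, coverage):
    """0-1 selective risk at a target coverage: accept the `coverage`
    fraction of samples with the highest confidence and return the
    error rate among the accepted samples.
    confidence: (N,) confidence scores; correct: (N,) booleans."""
    n_accept = max(1, int(round(coverage * len(confidence))))
    accepted = np.argsort(-confidence)[:n_accept]  # most confident first
    return 1.0 - np.asarray(correct, dtype=float)[accepted].mean()
```

Theorem 1 then states that the ensemble's risk-coverage curve, computed this way, lies below the member model's for every coverage in $(0, \phi_0)$.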
Although our proof relies on $\lim_{t\rightarrow 1}$, in experiments, the confidence score distribution concentrates heavily towards 1. Thus, a $t$ that is close to 1 will still result in a large coverage rate. This explains why in experiments we can observe a high coverage where the ensemble model outperforms the member model. We elaborate on this as follows.
As Lemma 1 claims, P(the ensemble yields confidence $\ge t$ $\mid$ an ambiguous input example) $= O((1-t)^M)$ as $t\rightarrow 1$, where $M$ is the number of ensemble members. Therefore, the ensemble hardly assigns an ambiguous sample a confidence close to 1. In contrast, the experiments show that definite samples are always assigned confidence scores close to 1 (see the black-edged bars in Figure 4). In this way, the ensemble stratifies the definite and ambiguous samples by their confidence scores, with the definite samples residing on a thin higher layer of confidence than the ambiguous samples (see the rightmost black-edged bars vs. red-edged bars in Figure 4). Combining this stratification with the low risk (of both the member model and the ensemble) on definite samples, when the target coverage is around the proportion of definite samples in the dataset, the selective risk of the ensemble should be lower than that of the member model. Furthermore, since there are a large number of definite samples (see the heights of the black-edged bars in Figure 4), the ensemble exhibits a lower selective risk than the member model even at a considerably large coverage.
In summary, the key factor that leads to the lower selective risk of the ensemble at a large coverage is the experimental fact that the definite samples are numerous and are always assigned confidence close to 1. This fact is not involved as an assumption in the theory since it seems a very strong assumption (though it holds throughout our experiments). We conjecture it is attributed to the low bias of DNNs (from a bias-variance perspective), which might be a widespread property of DNNs. Therefore, strengthening our theory by exploiting this fact would be an interesting direction for future work.
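The intuition behind Lemma 1 can be illustrated with a quick Monte Carlo sketch (our own toy model, not the paper's setting: member confidences on an ambiguous sample are drawn independently and uniformly, so the members disagree freely):

```python
import numpy as np

# Toy model of ambiguous samples: each member's confidence is an
# independent uniform draw on [0.5, 1].
rng = np.random.default_rng(0)
M, n, t = 5, 100_000, 0.95
member_conf = rng.uniform(0.5, 1.0, size=(M, n))
ensemble_conf = member_conf.mean(axis=0)  # averaging, as in Deep Ensemble

p_member = (member_conf[0] >= t).mean()   # a single member: O(1 - t)
p_ensemble = (ensemble_conf >= t).mean()  # the ensemble: O((1 - t)^M)
print(p_member, p_ensemble)
```

Under this toy model a single member exceeds $t = 0.95$ roughly 10% of the time, while the 5-member average almost never does, mirroring the $O((1-t)^M)$ rate: averaging clears ambiguous samples out of the high-confidence region.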
> For a theoretical study, the methodologies employed in this paper lack novelty or intrigue.
This comment seems vague. The contribution of this paper is that it is the first theoretical study of the ensemble model for selective classification. We provide insights into why the ensemble model performs well, propose reasonable assumptions, and give proofs. Could you provide a more detailed comment on why our paper lacks novelty, for example, by pointing to existing similar research results?
> What is the intuition behind defining low-ambiguity and high-ambiguity samples as in Assumption 1? Given this assumption, it appears that a majority of samples would be classified as high-ambiguity.
Do you mean the intuition behind definite samples and ambiguous samples defined in Assumption 1? We encourage the reviewer to refer to our global response first. The intuition is that we use *definite samples* (on which all member models yield the same predictive probability distribution) to approximate *low-ambiguity samples* (on which the ambiguity among member models is less than a threshold $\epsilon$). The approximation amounts to neglecting ensembling on low-ambiguity samples, and it is safe since the experiment in Section 4 shows that ensembling on low-ambiguity samples contributes only a minor improvement to the performance of the ensemble. In the attached PDF, we also provide the distribution of ambiguity to show that there are many samples on which the member models' predictions are quite similar, which serve as low-ambiguity (definite) samples.
---
Rebuttal Comment 1.1:
Title: Thank you for your reply
Comment: The reviewer would like to point out that, for a theoretical study aiming to explain known phenomena, proper modeling of the real problem is the essential part. The authors argue that their main conclusion is correct, but the correctness of this main conclusion is not what makes their theory acceptable; after all, the experimental results already demonstrate the effectiveness of the ensemble method. In their response, the authors have pointed out that the confidence distribution in their theory is different from the real distribution, and that is why the reviewer does not give a higher score: the theory only analyzes the ensemble model in a small part of the real situation ($\lim_{t\rightarrow 1^-}$). The authors' response does not address this concern, so the reviewer will not increase the score.
---
Reply to Comment 1.1.1:
Title: Clarification on reviewer's concern
Comment: The confidence score distribution concentrates heavily towards 1 in experiments (see Figure 4 in the paper). Although our proof relies on $\lim_{t\rightarrow 1}$, a $t$ close to 1 still corresponds to a large proportion of samples, so the result covers a large coverage rate.
---
Reply to Comment 1.1.2:
Title: Clarification on reviewer's concern
Comment: Thanks for your comments. If we understand correctly, your main concern is that the theory cannot explain the lower selective risk of the ensemble given a **large coverage**. We admit that this is not covered by our analysis. However, we can fill the gap with an experiment-based clue: the stratification of definite samples and ambiguous samples, which was provided in our previous response. If it was hard to follow there, we briefly summarize it as follows.
As Figure 4 shows, in the rightmost bar of each subfigure, the ensemble almost clears out the ambiguous samples (as Lemma 1 claims) but retains the definite samples. In addition, the definite samples are heavily concentrated in the rightmost bar. Thus, ensembling stratifies the definite and ambiguous samples and places the definite samples on a higher level of confidence scores, so that definite samples are selected first. Combined with the large number and low risk of definite samples, this stratification could explain the lower selective risk of the ensemble at a large coverage.
In summary, the key factor is the distribution of confidence on definite samples: its heavy concentration near 1 leads to the experimental results. This could not be derived from the theory, but we conjecture it is attributed to the low bias of DNNs (from a bias-variance perspective), which might be a widespread property of DNNs.
---
Reply to Comment 1.1.3:
Title: Rebuttal to the reviewer's comment
Comment: We have to point out that the reviewer misunderstood a key point of our response. The reviewer writes
> In the authors' response, the authors have pointed out that the confidence distribution in their theory is different from the real distribution...
Actually, the real distribution is a special case of Assumption 3, so by no means could one claim that the confidence distribution in our theory is different from the real distribution.
Contrary to the reviewer's statement, our previous response only demonstrates a special property of the real distribution (one not specified in the theory), filling the gap between theory and practice. | Summary: This paper provides a rigorous analysis of why the ensemble method succeeds, including both empirical evidence and theoretical proof. The authors find that the power of the ensemble method comes mostly from top-ambiguity samples, where the member models diverge, and they prove that the ensemble has a lower selective risk than its member model in certain cases.
Strengths: The paper is well-motivated and well-written.
The experiments are well-designed and thorough.
The theoretical analysis is well-formulated and sound.
The results are insightful and sensible and provide reassuring evidence of the use of ensemble methods in practice.
Weaknesses: The discussion around Figure 2 is in fact a bit hard to digest, the author might want to think about a better way to convey this explanation.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I don't have any questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, the author has addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We encourage you to read the global response, which addresses several common questions. In the following, we answer your concern.
> The discussion around Figure 2 is in fact a bit hard to digest, the author might want to think about a better way to convey this explanation.
This seems a challenging task, and it would be helpful if you could provide more specific suggestions. In any case, we will try our best to refine the paper.
---
Rebuttal Comment 1.1:
Comment: I agree that it is hard to make it even clearer; I am fine with leaving it as it is. Overall I have no further questions. | Summary: The authors present an analysis of deep ensembles in the context of selective classification, where a classifier has an option to abstain from providing a response in situations where it lacks confidence in its predictions. They prove that under reasonable assumptions, the performance of a deep ensemble in selective classification is guaranteed to beat that of its component members under the zero-one loss. They go on to justify their assumptions and provide experimental evidence for their claims, and finally demonstrate that deep ensembles are competitive with currently used methods in selective classification.
Strengths: - Compelling analysis. The authors provide an insightful analysis of ensemble performance that operates under reasonable assumptions that they later justify. Their approach offers theoretical insight into an interesting application of ensembles, namely selective classification.
- Related work. To my understanding, the authors provide a thorough and easy to understand survey of other methods in the field of selective classification.
Weaknesses: - I found section 4 of the paper to be weaker than some of the other sections. One instantiation of low vs. high ambiguity samples corresponds to having ensemble members which make identical predictions on 90% of the data, and different predictions only on the remaining 10%, which would also explain the results in Figure 1. While such an effect would still be consistent with the analysis that follows, it would be important to report in its own right. I would like to understand what the distribution of ambiguous samples looks like.
- Although illustrative, I do not follow how the results in Figure 5 and Table 1 correspond to the rest of the paper. In particular, how do these results relate to the importance of ambiguous samples in the performance of deep ensembles?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Figure 2 provides a useful intuition for the analysis that follows. Is it possible to provide a version of Figure 2 with real data as a justification of your assumptions as well?
- It may be worthwhile to include references to more recent work in ensemble performance decompositions for metrics besides 0-1 loss:
- https://arxiv.org/pdf/2206.10566.pdf
- https://proceedings.mlr.press/v151/ortega22a.html
- https://arxiv.org/abs/2301.03962,
- https://openreview.net/forum?id=6sBiAIpkUiO
- The choice of confidence estimator seems reasonable, but I could imagine cases where the maximum confidence does not correspond to the chosen output. Out of interest, could the analysis apply to other confidence estimators?
- In the MRPC dataset, the ensemble has higher selective risk than the member at ~30% coverage, as discussed in the text. Is this a violation of assumptions, or is $\phi_0$ lower than $30\%$ for this dataset?
- From your analysis, my understanding is that the ensemble should outperform any individual ensemble member (which is better than the average performance, as guaranteed by Jensen for convex losses). Is Figure 5 showing a comparison to the average single model, or the best, or a randomly selected one?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As discussed in the text, the analysis is limited to selective classification within a range of coverages, and extensions to more general settings are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We encourage you to read the global response, which addresses several common questions. In the following, we answer your concerns point by point.
> I found section 4 of the paper to be weaker than some of the other sections. One instantiation of low vs. high ambiguity samples corresponds to having ensemble members which make identical predictions on 90% of the data, and different predictions only on the remaining 10%, which would also explain the results in Figure 1. While such an effect would still be consistent with the analysis that follows, it would be important to report it in its own right. I would like to understand what the distribution of ambiguous samples looks like.
We find your question a little hard to parse, but we attempt to answer it as follows. The distribution of ambiguities is shown in Figure 0.1 in the global response. As the figure shows, on each dataset the distribution concentrates on a small interval near 0 and exhibits a long tail. The concentration corresponds to the definite samples, and the long tail corresponds to the ambiguous samples. Please comment on this response if you have any further questions.
> Although illustrative, I do not follow how the results in Figure 5 and Table 1 correspond to the rest of the paper. In particular, how do these results relate to the importance of ambiguous samples in the performance of deep ensembles?
Figure 5 can be related to the importance of ambiguous samples as follows:
1. the importance of ambiguous samples motivates Assumption 1;
2. based on Assumption 1 (as well as the other two assumptions), we prove Theorem 1;
3. Theorem 1 is verified by the leftmost subfigure of Figure 5.
The remaining parts of Figure 5 and Table 1 are not related to the importance of ambiguous samples; they simply compare the performance of the deep ensemble with other recent methods. However, these results help researchers recognize the strong performance of Deep Ensemble more clearly. Since the publication of Deep Ensemble in 2017, more recent work in selective classification has not included it as a baseline due to its heavy computational cost (Xin et al., 2021; Feng et al., 2023), which narrows the research on selective classification to individual models. To broaden the horizon of selective classification, we report the comparison of Deep Ensemble with other recent work in Figure 5 and Table 1. As the results show, the ensemble largely outperforms more recent methods and is competitive with their ensembles. We hope these results will draw more attention from researchers to the ensemble method.
Reference
- Xin et al. The Art of Abstention: Selective Prediction and Error Regularization for Natural Language Processing. In ACL, 2021.
- Feng et al. Towards Better Selective Classification. In ICLR, 2023.
> Figure 2 provides a useful intuition for the analysis that follows. Is it possible to provide a version of Figure 2 with real data as a justification of your assumptions as well?
A version of Figure 2 with real data can be found in Figure 4, where each subfigure is an instance of Figure 2 evaluated on a dataset. Although the x-axes are truncated (to focus on the overlapping regions of definite and ambiguous examples) and the thresholds are not marked, one can readily see that these subfigures realize the case depicted in Figure 2.
> It may be worthwhile to include references to more recent work in ensemble performance decompositions for metrics besides 0-1 loss: ...
Thanks for sharing recent papers that analyze ensembles; they extend our knowledge of diversity and performance decompositions of the ensemble. Fortunately, the problems they solve do not coincide with the problem considered in this paper. We would like to include them as related work in our paper.
> The choice of confidence estimator seems reasonable, but I could imagine cases where the maximum confidence does not correspond to the chosen output. Out of interest, could the analysis apply to other confidence estimators?
Theoretically, it could. To apply our analysis to general cases (i.e., the member model is a general selective classifier (f, g), rather than a vanilla classifier as in Deep Ensemble), we just need to modify the definitions of definite and ambiguous samples. The definite samples should be redefined as those examples on which all member models predict the same (f, g) values, and the ambiguous samples should be redefined as those examples on which no statistical dependency among member models is specified. As long as we redefine definite/ambiguous samples and make almost the same assumptions, we can derive the same result as Theorem 1.
Although the extension is simple, in practice, the assumptions should be examined again. This could be an interesting future work.
> In the MRPC dataset, the ensemble has higher selective risk than the member at ~30% coverage, as discussed in the text. Is this a violation of assumptions, or is $\phi_0$ lower than 30% for this dataset?
The reason might be the sparsity of the data. The test set of MRPC contains only about 400 examples, and at 30% coverage the number of accepted examples is much smaller (about 120). These data are inadequate to accurately estimate the selective risk, leading to high variance: misclassifying a single example by chance could raise the selective risk by about one percent (roughly 1/120). This high variance might explain why the risk-coverage curve of the member model fluctuates violently and goes below that of the ensemble several times at low coverage. This problem seems irresolvable since we cannot sample more MRPC data to reduce the variance.
> Is Figure 5 showing a comparison to the average single model, or the best, or a randomly selected one?
In Figure 5, each single model is randomly selected from the corresponding ensemble's member models.
---
Rebuttal Comment 1.1:
Title: Thank you for your response.
Comment: - It's very useful to see the distribution of responses, which captures the phenomenon I mentioned in my initial review (identical predictions on most of the data). Given such a distribution, most "low-ambiguity samples" as defined here are actually zero-ambiguity samples, so the results in Figure 1 are trivially expected. It would be important to include Figure 0.1 in the main text, as it provides a much more definitive view of how ambiguity works in an ensemble than the current Figure 1. In particular, we might imagine a case where the ambiguity distribution is much more centrally distributed; such a distribution might still generate risk-coverage curves that look like Figure 1, which would be a much more surprising finding.
- Thank you likewise for clarifying the role of the remainder of Figure 5, beyond the leftmost panels.
- Thank you for pointing out that Figure 4 demonstrates the intuition given in Figure 2; I missed this in my reading of the text. I'd like to suggest unifying the color scheme/layout between Figures 2 and 4 so this correspondence is easier to see.
- I appreciate the answers to my remaining questions as well. I will be keeping my score. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. This paper is the first to provide a theoretical foundation for Deep Ensembles in selective classification. All reviewers, except Reviewer qEtZ, agree that our paper has the following strengths:
1. The analysis in this paper provides some insight into the behavior of the ensemble model on selective classification problems.
2. The assumptions in this paper are more realistic and reasonable than existing works (that analyze ensemble models on ordinary classification tasks). They are well motivated and justified by experiments.
However, it seems that some points on Assumption 1 (about ambiguous and definite samples) are not clear, and the reviewers' concerns concentrate on Assumption 1. So, we will clarify it here. We also conduct more experiments on more datasets and backbones, as per Reviewer qEtZ's comments.
**Clarification on Assumption 1.**
We first introduce the intuition behind Assumption 1. Based on the experiment results, we observe that the predictions of member models coincide on some samples and diverge on some other samples. Moreover, the performance improvement of the ensemble model largely comes from the diverged (ambiguous) samples. This motivates us to propose Assumption 1 to facilitate our theoretical analysis.
Concern 1: the definitions and differences of *definite/ambiguous samples* and *low/high-ambiguity samples* (Reviewers rxSh and qEtZ).
We use *low/high-ambiguity samples* in the experimental analysis and *definite/ambiguous samples* in the theoretical analysis.
In the experimental analysis, we measure the ambiguity of a sample as the divergence of the predictions among member models. By defining a threshold $\epsilon$, samples with ambiguity less than $\epsilon$ are *low-ambiguity samples*, and the others are *high-ambiguity samples*.
Then, to facilitate the theoretical analysis, we abstract *low-ambiguity samples* as *definite samples* where all member models yield the same predictive probability distribution, as Assumption 1 states. This is a simplification of the real-world situation.
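As an illustration only (the exact divergence measure from the paper is not reproduced here; the disagreement-rate measure, the probabilities, and the threshold below are our own toy choices), the experimental low/high-ambiguity split can be sketched as:

```python
# Toy sketch of the experimental split (our own simplified ambiguity
# measure, not necessarily the paper's): ambiguity = fraction of member
# models that disagree with the majority-vote label.
import numpy as np

def ambiguity(member_probs):
    """member_probs: (n_members, n_classes) predictive distributions."""
    labels = np.argmax(member_probs, axis=1)
    counts = np.bincount(labels)
    return 1.0 - counts.max() / len(labels)

probs = np.array([[0.9, 0.1],      # member 1 predicts class 0
                  [0.8, 0.2],      # member 2 predicts class 0
                  [0.4, 0.6]])     # member 3 predicts class 1
eps = 0.5                          # toy threshold
amb = ambiguity(probs)             # 1 - 2/3 = 1/3
is_low_ambiguity = amb < eps       # low-ambiguity sample under this eps
```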
Concern 2: What is the distribution of ambiguity on a dataset? (Reviewer Y1pj)
We provide the distribution of the ambiguity in Figure 0.1 in the pdf of this response.
**More experimental results.**
We recently extended the experiments along three dimensions:
1. model architecture (to ResNet, AlexNet, and DenseNet);
2. dataset (to ImageNet100);
3. number of member models in the ensemble (to 20).
The results are shown in Figure 0.2 in the pdf of this response. As the figure shows, Assumptions 1, 2, and 3 are consistently verified across various experimental settings. The results indicate that our assumptions might reflect general characteristics of DNNs. This could be explained from a theoretical view. For example, Assumptions 2 and 3 might be attributed to the low bias of DNNs (from a bias-variance perspective) due to DNNs' large model capacity.
Pdf: /pdf/fe2289c536454977ad7479fd8a8f7644106b0591.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
NCDL: A Framework for Deep Learning on non-Cartesian Lattices | Accept (poster) | Summary: This paper generalizes common machine learning operations from Cartesian lattices to other regular lattices, such as the hexagonal lattice. The authors argue that the Cartesian lattice is a sub-optimal representation for important natural signals, and that operating on their non-Cartesian structure natively leads to more efficient implementations and better results.
The authors further promise to release a software library for non-Cartesian deep learning, and include an experimental section with implementation details and efficiency arguments, as well as experimental results on various computer vision tasks.
Strengths: The contribution is very novel in the sense that the field of deep learning on lattices is under-explored. The authors' work is likely to have a big impact on this area of machine learning, as providing a complete optimized library with non-Cartesian versions of common lattice operations will speed up future research significantly.
Weaknesses: The paper is short, with the authors being allowed one more page of content. They could improve the discussion by expanding, for example, on the computation of the derivatives.
Are the backward passes of all operations naturally handled by PyTorch or did it require manual implementations? What about numerical stability?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The area of non-Cartesian deep learning could reasonably be considered a sub-field of geometric deep learning, which also includes deep learning on graphs, and models that leverage group symmetry. The authors should consider comparing lattice neural network approaches with GNNs (similar to experiments using sub-pixel graphs), as the graph models should be able to handle the non-Cartesian grid (albeit differently and with less inductive bias).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Some aspects are lacking, as explained above:
- Derivatives and numerical stability
- Comparison with other non-Cartesian models such as GNNs and group equivariant neural networks
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review : ). Please see the global rebuttal; it should address most of your comments.
I believe the only issue left unaddressed is the question about numerical stability. This is a good catch, and something we didn't touch on in the paper. Since we leverage PyTorch for (almost all) of our operations, we inherit all the properties of PyTorch's underlying implementations. Lattice tensor convolution, for example, is the sum of a small number of Cartesian convolutions. If we assume PyTorch's implementation of convolution is stable, then we can reasonably assume that the resulting lattice convolution is stable. This is not a completely formal argument, but it could be formalized better if appropriate for the final submission.
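To make the decomposition argument concrete, here is a toy pure-Python sketch (illustrative only, not the NCDL implementation): a convolution over the quincunx lattice {(x, y) : x + y even} splits exactly into per-coset terms, and each coset is a shifted 2Z x 2Z Cartesian sublattice, so each term could be dispatched to a standard Cartesian convolution:

```python
# Toy quincunx convolution: signals/filters are dicts mapping lattice
# points to values.  The quincunx lattice splits into two cosets (even x /
# odd x), each a shifted Cartesian sublattice, and by bilinearity the full
# lattice convolution equals the sum of the four per-coset convolutions.
def lattice_conv(f, h):
    """Brute force: (f * h)(p) = sum_q f(q) * h(p - q)."""
    out = {}
    for (qx, qy), fv in f.items():
        for (kx, ky), hv in h.items():
            p = (qx + kx, qy + ky)
            out[p] = out.get(p, 0.0) + fv * hv
    return out

def coset(points, parity):
    """One Cartesian coset of the quincunx lattice (x parity 0 or 1)."""
    return {p: v for p, v in points.items() if p[0] % 2 == parity}

def coset_conv(f, h):
    """Same result, computed coset-by-coset."""
    out = {}
    for fp in (0, 1):
        for hp in (0, 1):
            for p, v in lattice_conv(coset(f, fp), coset(h, hp)).items():
                out[p] = out.get(p, 0.0) + v
    return out

f = {(0, 0): 1.0, (1, 1): 2.0, (2, 0): -1.0}   # toy quincunx signal
h = {(0, 0): 0.5, (1, -1): 1.5}                # toy quincunx filter
assert lattice_conv(f, h) == coset_conv(f, h)  # decomposition is exact
```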
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply.
Indeed providing the theory for the computation of the derivatives is of high interest to the community for:
- intellectual reasons
- efficiency as mentioned in the rebuttal (fusing operations etc)
- checking numerical stability and/or implementing alternative direct computations to improve stability or handle edge cases | Summary: This work introduces a framework as well as software for computing convolutions on non-Cartesian lattices. The method is compared to existing software for hexagonal lattices as well as on image data.
Strengths: The method is put in a strong theoretical framework that also explores the very important up- and down-sampling operations on non-uniform grids. The authors also make available open-source software that is more general, and whose performance seems much better, than anything available.
Weaknesses: In the context of scientific machine learning, there have been similar ideas explored on how to make architectures which work on arbitrary grids, for example, https://arxiv.org/abs/2207.05209, https://arxiv.org/abs/2305.19663, and I am sure there are others. Furthermore, graph neural networks can also perform convolutions on arbitrary grids, for example, https://arxiv.org/abs/1704.01212, among many, many others. It would be good to mention some of these and even include some comparisons.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The motivation for images on hexagonal grids is a bit unclear to me. I always think of images as living on Cartesian grids, so it is somewhat strange to consider them on hexagonal grids instead. Why is this useful? Furthermore, where do you see most of the applications for this? It would be great to include possible large-scale applications that lay out a vision for future work.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review : ). With respect to the missing references, see the main rebuttal.
This work does indeed require you to bend the notion that images are composed of square pixels. Given an isotropically band-limited image/function (which describes most natural images), it is possible to represent that image with ~14% fewer samples on a hexagonal grid compared to a Cartesian grid [1]. This is perhaps surprising from the perspective of computer science, but when you look to nature, it is less surprising. Many structures that appear in nature are hexagonal; the photo-receptor cells in your retina are arranged in a nearly hexagonal pattern. There's a relatively large body of work on this; it is worth taking a look at both [1] and [2] if you are interested (also, some of the references in our background section expand on this in higher dimensions).
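For reference, the ~14% figure follows from a classical back-of-the-envelope sampling argument (our paraphrase of the standard result, not quoted from [1]):

```latex
% Alias-free sampling of an isotropically band-limited signal requires the
% circular spectral replicas not to overlap in the frequency plane, so the
% required sample density is inversely proportional to how efficiently
% those disks can be packed: \pi/4 for square packing (Cartesian lattice)
% versus \pi/(2\sqrt{3}) for hexagonal packing.
\[
  \frac{d_{\mathrm{hex}}}{d_{\mathrm{cart}}}
    = \frac{\pi/4}{\pi/(2\sqrt{3})}
    = \frac{\sqrt{3}}{2} \approx 0.866,
  \qquad
  1 - \frac{\sqrt{3}}{2} \approx 13.4\% \text{ fewer samples.}
\]
```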
It's also worth noting that NCDL is not limited to the Hexagonal grid. We support any regular (integer) lattice structure. This is notable because it allows us to start with data on a Cartesian lattice, then move to another lattice (quincunx, for example, see the experiment at the end of the current submission). This provides a much smoother transition to lower resolution representations. | Summary: This paper introduces a high-quality software extension for PyTorch that enables seamless computations with non-Cartesian lattices for 1D, 2D and 3D images. The key observation made by the authors of this work is that **non-Cartesian lattices can be decomposed as "sums" of Cartesian lattices**: up to some clever refactoring (which makes up the core numerical code of the proposed software package), efficient computations on non-Cartesian lattices can directly leverage the standard (and highly optimized) PyTorch/cuDNN implementations of e.g. convolutional layers.
This implies that **the proposed software package is both efficient and easy to maintain in the long run**.
Strengths: - The authors tackle an interesting and **very original topic**. Non-Cartesian lattices are indeed fundamental to low-level image processing but essentially absent from the machine learning literature.
- The paper is **extremely well written**, with clear figures, attention paid to details and a satisfying evaluation. I haven't tried to run the code provided in the supplementary materials (too many papers to review at once!), but this is **clearly a high-quality software package** with a clean structure, a full test suite and extensive documentation.
- This package targets successfully one specific and interesting operation in image processing, providing a **useful extension to our common toolbox via a neat and clever software package**. This is more than what most papers (never mind submissions) can provide and, in my opinion, clearly warrants publication at NeurIPS.
Weaknesses: Reviewers can always ask for more (outstanding deep learning experiments, better run times, support for all types of attention layers and niche hardware architectures, etc.)... But realistically, I am very happy with the paper as submitted.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors:
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you, I believe no comments are necessary from me, here.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of the rebuttal
Comment: You are welcome. I am very satisfied with the paper: having read all reviews and rebuttals, I am convinced that this submission is a very clear accept. Good luck! | Summary: he authors claim that the concept of tensors, a fundamental cornerstone in machine learning, assumes data are organized on Cartesian grids. They further suggest that alternative non-Cartesian representations may be more beneficial in certain situations. One case is when the data is inherently non-Cartesian — for example, the raw R/G/B image data from most imaging sensors are follows a quincunx (checkerboard) structure in the green channel. Another case is when non-Cartesian grids show superior performance in certain aspects — for example, the hexagonal lattice is known as the optimal sampling lattice in 2D.
The authors hereby propose a framework and a software library that introduces standard machine learning operations on non-Cartesian data. The new data structure is called lattice tensor and the software library is called Non-Cartesian Deep Learning (NCDL).
Strengths: 1. In the introduction section, the authors provided sufficient context on why non-Cartesian grid structures may be superior to Cartesian counterparts under certain circumstances.
2. We shall appreciate the effort in designing the memory-efficient coset representation for operations on the hexagonal lattice (shown in Figure 2).
Weaknesses: While custom tensor definitions and operations for non-Cartesian data representation are theoretically pleasing, the authors have not clearly shown the potential applications and impact. It is not obvious which datasets and/or standard machine learning tasks will directly benefit from the non-Cartesian representation. A table detailing some typical use cases would be very helpful.
The experiments/comparisons performed are not convincing enough. The two main results shown in the submission are (1) runtime of convolution operation at different grid sizes, and (2) loss curves of a Cartesian vs. Quincunx auto-encoder on CelebA (celebrity faces) dataset.
In the first experiment, while the authors compare the runtime against the standard Cartesian Conv2D and another non-Cartesian baseline (HexagDLy), they have only investigated the convolution operation but skipped the other operations such as pooling, downsampling, upsampling, gradient computation, and back-propagation. Further, no comment has been made on numerical correctness or precision.
In the second experiment, the authors aim to show superior performance of the Quincunx auto-encoder for image reconstruction. It is a bit weak, as the experiment is only performed on one task over one dataset. It may be helpful to include a few more datasets; they don't even need to be big ones, e.g., STL-10, SVHN, or LSUN would be sufficient. Besides, the only metrics shown are L1 and SSIM on the validation set. I would recommend including other metrics such as L2, PSNR, FID, and perceptual distance.
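For instance, PSNR comes almost for free once the L2 error is logged; a minimal helper (illustrative only, not tied to the submission's code):

```python
# PSNR is a simple function of the mean-squared (L2) error, so reporting
# it costs nothing once the L2 loss is already computed.
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images valued in [0, max_val]."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)    # uniform 0.1 error -> MSE = 0.01 -> 20 dB
```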
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Continuing on Weakness #1, it will be very helpful if the authors can construct a table which outlines several typical use cases, with the following information: (1) domain (vision/language/graph/etc), (2) dataset name and description of data format, (3) machine learning task (classification/segmentation/reconstruction/retrieval/etc), (4) preferable non-Cartesian representation, (5) short explanation on why it’s better than Cartesian. Do you think it is a reasonable idea, or is there a better way to demonstrate the applications and impact?
2. Open-ended question: will it be better to embed the proposed method in hardware?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: While it has been covered in previous sections, the main limitations are:
- Unclear potential application and impact.
- Insufficient experiments and comparisons.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### "It will be very helpful if the authors can construct a table which outlines several typical use cases, with the following information: (1) domain (vision/language/graph/etc), (2) dataset name and description of data format, (3) machine learning task (classification/segmentation/reconstruction/retrieval/etc), (4) preferable non-Cartesian representation, (5) short explanation on why it’s better than Cartesian. Do you think it is a reasonable idea, or is there a better way to demonstrate the applications and impact?"
This is a good suggestion, but difficult to execute because some of these points venture into research territory. I can briefly comment on how NCDL can affect each of these aspects.
* **domain**) The applicable domains here are any vision problem or any problem involving volumetric data. One key observation that bears repeating is that your data do not need to reside on a Cartesian grid to use these approaches. The non-dyadic downsampling operation takes data from one lattice configuration and places it in another (for example, Cartesian to quincunx). Depending on the source and target lattice, this operation discards much less data compared to a standard stride=2 convolution. This would be a relatively large list of problems.
* **datasets and data formats**) My assumption is that you would want to see how a non-Cartesian approach would be more appropriate for a given dataset? There are cases where data may be structured inappropriately for a specific grid. For example, if the data in question is pixel art, and we attempt to represent that pixel art in a hexagonal domain, this will clearly be beaten by a Cartesian approach. However, there may be some other network architecture that both 1) includes non-Cartesian convolution, and 2) outperforms a purely Cartesian approach. Again, this is something we are actively investigating as future research.
* **non-Cartesian representation**) The lattice tensor is always our preferred representation. There is no compelling reason to use a different data structure; the lattice tensor supports arbitrary lattices. Even if we limit ourselves to strictly hexagonal lattices, there is no other data structure that has the flexibility needed for appropriately padding data before convolution (or pooling, or any other operation that consumes part of the data's spatial extent).
* **reason for using a non-Cartesian approach**) In the context of approximation theory, there are good arguments as to why non-Cartesian approaches should be superior to Cartesian approaches. In the context of machine learning, I'm not sure if these arguments hold; this needs to be assessed on a problem-by-problem basis. NCDL introduces a new primitive that adds another degree of freedom to network architecture design; how that degree of freedom will affect results is not 100% clear. To us, this is a very attractive avenue for future work.
We can add a table detailing some of what you want. This is completely doable and reasonable, however it will be based on speculation, and likely belongs in the future work section.
### "Will it be better to embed the proposed method in hardware?"
We are confident that there would be a benefit to specialized hardware, yes. How to do this is a subject of future research. For example, there are necessarily parallel computations that can be exploited (for convolution, there are completely independent convolutions happening for one given operation). Without giving too much away, I can say that splitting grids onto physically separate memory chips could allow for more efficient use of the bandwidth of those chips. This is something we are actively thinking about.
### "Insufficient experiments and comparisons."
We will expand our set of test metrics, and add another application/dataset to compare with.
### "Unclear potential application and impact."
We do not agree with this point. This work adds a set of new primitives that are applicable to many problems that use convolution. However, we do agree that more evaluation would help drive this point home. | Rebuttal 1:
Rebuttal: First of all, I’d like to thank the reviewers for their time and detailed reviews. We will first address comments common to multiple reviewers.
### Derivatives
Multiple reviewers raised some variation of a concern about the derivative computation. To clarify: all of the computation for derivatives is handled by PyTorch. In earlier iterations of this work, some derivative computation was explicit; however, we opted to rely on PyTorch for derivative computation, as writing a new layer with a custom backward pass (especially for something like the LatticeConvolution layer) is non-trivial (we noticed no significant performance difference when doing this for some test cases).
There would, however, be a benefit in terms of memory consumption, as fewer auxiliary tensors need to be kept for the backwards pass. We will expand the discussion around derivative computation, then provide the additional theory for these computations (at least) in the supplementary material.
### Graph convolution and other missed references:
We missed this connection. We will provide, at the very least, a discussion on graph convolutions and how they fit in with our work. We will cite the relevant information in the following works
* https://arxiv.org/abs/2207.05209
* https://arxiv.org/abs/2305.19663
* https://arxiv.org/abs/1704.01212
### References:
[1] The processing of hexagonally sampled two-dimensional signals, Proceedings of the IEEE 67 (6) (1979) 930–949. doi:10.1109/PROC.1979.11356.
[2] Middleton, Lee, and Jayanthi Sivaswamy. Hexagonal image processing: A practical approach. Springer Science & Business Media, 2005. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Nonparametric Boundary Geometry in Physics Informed Deep Learning | Accept (poster) | Summary: The authors introduce a cross-attention (without self-attention) based decoder architecture for neural operators for solving PDEs. The architecture contains an encoder component that takes the boundary, defined by a triangular mesh, as input. The mesh encoder uses graph convolutions on the edges, with a uniform 4-nearest-neighbour structure across the triangular mesh, to define the features of the boundaries; the decoder's attention layers propagate the information to create the PDE solution in the interior domain. The decoder uses ReLU non-linearities, while the transformer uses SiLU; the latter ensures that the automatic differentiation of the PDE solution is smooth enough everywhere.
The system is trained with 12000 example meshes (geometries), validated against FEM solutions. The parameters with the lowest validation error from each epoch are used. The method is demonstrated on a set of simple geometries and equations, with mostly encouraging results.
The capability of solving PDEs with varying boundaries in an efficient way is valuable for many industrial applications and is an impactful addition to the physical-system simulation toolbox.
Strengths: The introduction is clear and easy to follow, also the manuscript brings clearly up the benefits of using PINNS in the industrial domain.
The neural network operator can take a boundary defined by a triangulation as input, and hence allows for the solution of PDEs over varying geometries using only a neural network inference; with convolution and layers of cross-attention, it has the potential to provide solutions much faster than traditional FEM solvers.
Weaknesses: The architecture of the solution, especially its hyperparameters, is not described. Figures showing the components of the architecture are missing, and the verbal explanation, with references to similar architectures (MeshNet and the decoder from the original "Attention Is All You Need"), makes it hard to read. As introducing the novel architecture is the main result of the paper, one should concentrate on describing it in a simple visual manner that allows a wider audience to grasp the essentials.
Also, PDE solutions can have singularities when geometries are given by a triangulation; the manuscript should address how such solutions are regularised.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Poisson equation solutions have corner singularities in certain geometries. Now, analytically, the PDE solution on a triangulation-based geometry behaves badly at all vertices of the boundary mesh. It is possible to solve the problem by smoothing the surface, but this would require a different parametrisation of the surface, for example splines, as often used in industry. How would the current neural operator architecture address this problem?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see our other response and the attached diagram for an explanation of our architecture. We hope this provides a clearer picture of our model.
It is true that non-smooth boundaries (e.g. sharp points or corners) can lead to PDE solutions which are non-differentiable, discontinuous, or even have singularities. Non-smooth behavior is an important consideration for PINNs, and they usually perform very poorly in these cases.
There are at least 2 separate scenarios to consider here. (1) The geometry of interest is smooth, and the triangular mesh is a non-smooth approximation to the geometry, and (2) the geometry of interest does in fact have important non-smooth points such as corners.
In the first case, we assume the true solution to the PDE is smooth, and so it may be that the PINN doesn't have any trouble, since it is smooth by design. Essentially, the PINN's inherent smoothness naturally regularizes it. Even so, it may be valuable to reparameterize the boundary geometry with spline patches instead of flat triangles, as is done in many FEM applications. The structure of this is still a mesh, but with additional information about the spline curvature. There is nothing to prevent this information from being used as feature information in the input to the MeshCNN. The MeshCNN doesn't care whether the edges it is given are flat or curved; only the connectivity matters. Of course, during training, boundary and interior points still need to be sampled accordingly.
In the second case PINNs are generally incapable of exhibiting this kind of non-smooth behavior without specific modifications. This is an important and rich line of research, and there are many ideas that address it; however, it is somewhat orthogonal to the points in our paper. We do mention some of this work in the introduction, but we do not ourselves directly address this point.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the clearer description of the structure of their solution. I look forward to seeing this in the new version of the manuscript.
You may also consider comparing your solution to BEM, as it has similarities. With knowledge of the solution on the boundary, one can use the Kirchhoff integral to calculate the interior, provided that the medium is homogeneous. Calculating this single convolution on the GPU may provide faster inference, after the boundary values have been computed. In industrial applications the same equation may be solved repeatedly for different interior points.
The boundary-value consistency requirement in BEM is essentially a full "attention" over all other boundary values. With fast multipole expansions, the contributions can be lumped together to significantly speed up the solution, in practice corresponding to a convolution with a small kernel.
---
Reply to Comment 1.1.1:
Comment: Thank you for this suggestion.
This is a very interesting point. The BEM certainly shares parallels with our operator approach, and it may be a valuable discussion point to add to the paper given the space. This closely relates to the topic of learning Green's functions with neural operators. | Summary: This paper proposes a neural operator method that can take different boundary geometries, represented as meshes, as input to solve different PDEs. To the best of my understanding, the proposed method can take geometry represented by different triangular meshes as input and predict PDE solutions (i.e. a function of the geometry). The proposed method uses MeshCNN to extract geometry features and a cross-attention transformer as the decoder to produce the solution. The proposed model needs to be trained only once for a specific PDE; it can then be used to produce solutions for different conditions.
This work addresses a very important problem, but due to presentation issues and some concerns with the results, I am leaning toward rejection. I think the paper could be greatly strengthened by an iteration with better presentation and more evaluation.
Strengths: 1. I think the problem this paper sets out to solve is very important. When engineers use a model to speed up their simulation pipelines, they usually require solving the same problem with different geometries. If the trained model can only work on one specific geometry, then it might not be the best fit for such a pipeline. With this said, if this paper is successful, it can enable neural operators to be used in more practical scenarios.
2. The proposed method (if I understand it correctly) can indeed handle different geometries while maintaining the invariances required by the problem set-up.
Weaknesses: 1. Presentation issues. I found the paper (especially the method section) difficult to understand, as it lacks a rigorous definition of the problem set-up (e.g. input/output of the model; how it is represented; what the network architecture is). As the paper is proposing a specific way to achieve this goal, I think including a more rigorous definition of the model can improve reproducibility and help readers understand why each model design choice is necessary.
2. The results do not necessarily support the claims. An important claim of the paper is that the proposed model can be trained over different geometries of the same PDE, thus providing an advantage by amortizing the training compute across multiple simulation rounds. To show this, I think it is necessary to compare against a baseline where, for each geometry, we train a neural operator (which we might not be able to do without data) or solve a PINN (this is valid, as PINNs can be solved from scratch). I expect this comparison would show that the proposed method is comparable to the baseline while being faster, since PINNs require optimization whereas the proposed method requires only a forward pass. Right now the results only show a comparison with the ground truth, which makes it difficult to see 1) whether the proposed method is accurate enough, and 2) whether the hypothesized advantage of amortizing training across different geometries exists.
3. Agnosticism to discretization. While the paper cites MeshCNN to argue that in practice the feature-extraction pipeline is not adversely affected by discretization quality, I think this argument is not sufficient, as simulation using FEM usually requires careful discretization and is a different task from those MeshCNN was designed for (classification and segmentation). The fact that the method section mentions the manifold assumption in L88-100 strengthens my concern. While this may better fit into the future-work scope, I would recommend the authors stress-test how different discretizations affect performance, and perhaps discuss how this method can be applied to non-curated meshes such as those obtained from real-world scans.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the author has a good discussion of the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see our other response and the attached diagram for an explanation of our architecture. We hope this provides a clearer picture of our model.
We have added in the attached pdf two examples of vanilla PINNs trained on test geometries for comparison with our neural operator. For the architecture, we use the same feed forward layers that appear in our operator with silu non-linearities and skip connections. In other words, we train a PINN consisting of 5 blocks, where each block is of the form:
$h_{t+1} = \operatorname{LayerNorm}(h_t + W_t \operatorname{silu}(U_t h_t + a_t) + b_t)$,
where each $h_t$ is 512 dimensional, and each block has 2048 hidden neurons.
These values are chosen in an attempt to make a fair comparison.
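For concreteness, the residual block stated above can be sketched numerically. This is an illustrative NumPy version of the recurrence $h_{t+1} = \operatorname{LayerNorm}(h_t + W_t \operatorname{silu}(U_t h_t + a_t) + b_t)$ with the stated dimensions (512-dim state, 2048 hidden neurons); initialization and training are omitted, and this is not the authors' exact implementation:

```python
import numpy as np

def silu(x):
    # silu(x) = x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-5):
    # normalize over the last axis to zero mean and unit variance
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def pinn_block(h, U, a, W, b):
    """One residual block: LayerNorm(h + W silu(U h + a) + b)."""
    return layer_norm(h + W @ silu(U @ h + a) + b)

# dimensions from the rebuttal: 512-dim hidden state, 2048 hidden neurons
rng = np.random.default_rng(0)
d, d_hidden = 512, 2048
h = rng.standard_normal(d)
U = rng.standard_normal((d_hidden, d)) * 0.01
W = rng.standard_normal((d, d_hidden)) * 0.01
a = np.zeros(d_hidden)
b = np.zeros(d)
out = pinn_block(h, U, a, W, b)
assert out.shape == (d,)
```

Stacking five such blocks, as described, yields the PINN used for comparison.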
Even though we observe the losses converging, the fitted model is often quite different from the ground truth solution. Without more sophisticated training techniques or simulation data (see the introduction of our paper) we have thus far been unable to attain better performance with vanilla PINNs on these geometries than what is shown in the attached pdf.
The point regarding the discretization quality of the boundary is a very important one. The MeshCNN paper claims that in practice they noticed their model was able to maintain accurate predictions on different discretizations of the same object. However, it is important to note that these different discretizations were done in a way which preserves the overall geometry (this requirement is much more restrictive on surface meshes than on volumetric meshes), while also maintaining similar resolution. Geometric features are calculated in a manner which approximates their continuous counterparts in the limit of an infinite resolution mesh. It does make sense then that a poor quality discretization, with either a very different resolution or one which poorly approximates the geometry should indeed adversely affect the quality of the results. Approximate invariance to discretization depends on the assumption that the geometric features themselves are approximately invariant to the discretization process, which is only true for meshes of fine enough resolution.
As you mention, we will leave a rigorous assessment of mesh discretization to future work, but we will ensure that this point is more clearly discussed in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. I think a more detailed discussion of the point about discretization will help improve the manuscript.
The results comparing with PINNs can greatly strengthen the paper - I encourage the authors to make this comparison more rigorous in the next revision.
I'm happy to raise the score under the faith that the authors can include a better PINNs comparison, revise the writing to make the paper more clear, and include the discussion on surface discretization.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We will certainly expand on the points regarding surface discretization and add the comparison to vanilla PINNs in the revised version of the paper. | Summary: Physics-Informed Neural Networks (PINNs) have gained considerable attention lately. However, they are significantly more expensive than FEM or other classical PDE solvers. Moreover, any trained PINN is specific to the object geometry it has been trained upon. This paper proposes a solution for reusing the trained model across various object geometries by learning a PDE solution without restricting it to a certain object parametrisation. It takes into account the boundary conditions on the object in terms of meshes and deploys MeshCNN instead of a regular CNN to encode local geometric properties under an attention mechanism [17] to learn the solutions to the PDE.
Strengths: The proposed method addresses an important problem of making the physics-based learning more generic.
The paper discusses an important aspect of learning on meshes: contravariance/invariance of geometric properties under mesh formulations. It enforces the preservation of geometric quantities in order to solve the PDE.
The experiments show various well-known PDE systems are solved quite accurately using the proposed approach.
Weaknesses: The writing is a bit difficult to follow. The use of footnotes is unusual and interrupts the reader's flow.
Some design aspects of the paper are not clear. Lines 118-120 suggest that the self-attention block of the Transformer is removed. The authors argue that the value of the learned function at a point x should be independent of whether or not one is simultaneously calculating its value at another point x′. This argument needs to be explained further, as geometrically close points manifest similar properties, and learning them together (or simultaneously) can be beneficial for obtaining the solution to the PDE. It is understandable that geometrically distant points need not be considered simultaneously. Choosing to remove the self-attention layer seems to be a compromise and should be explained in more detail.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: see weakness section
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations have been discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see our other response and the attached diagram for an explanation of our architecture. We hope this provides a clearer picture of our model.
We have attempted to explain above why it is undesirable for information to be shared between different points. For a bit of further clarification, it is certainly possible to build a model which does explicitly share information using eg. self-attention, but this would be a different species of operator. If $\mathcal{M}$ is the space of 2 dimensional closed boundary manifolds in $\mathbb{R}^{3}$ and $C^{2}(\mathbb{R}^{3})$ is the space of twice differentiable functions, our operator has type $\mathcal{M} \to C^{2}(\mathbb{R}^{3})$. By partial application of the MeshCNN part of the model, one arrives at a function $\phi \in C^{2}(\mathbb{R}^{3})$, which can be evaluated independently at any point within the domain. On the other hand, if we were to allow mixing of information between points with self-attention, we would have an operator of type $\mathcal{M} \to (\mathbb{R}^{3n} \to \mathbb{R}^{n})$ for all values of $n$. Partially applying the MeshCNN in this case would result in an object which is not an element of $C^{2}(\mathbb{R}^{3})$, but is itself an operator on a more complex space: something like $\hat\phi(x_1,x_2,\ldots) = (\phi(x_1), \phi(x_2), \ldots)$. In this case, the Jacobian of $\hat\phi$ would not be diagonal and the loss calculation would need to take this into account.
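The partial-application argument can be illustrated with a toy sketch. Here the "encoder" is a hypothetical stand-in (mean-pooling of edge features) rather than the actual MeshCNN; the point is only the type structure: encoding a mesh once yields an ordinary function $\phi$ that can be evaluated independently at any point:

```python
from typing import Callable
import numpy as np

def make_operator(mesh_edges: np.ndarray) -> Callable[[np.ndarray], float]:
    """Toy operator of type M -> C(R^3): encode the mesh once, then
    return a function phi evaluated independently at each point."""
    emb = mesh_edges.mean(axis=0)  # hypothetical stand-in for the MeshCNN encoding
    def phi(x: np.ndarray) -> float:
        # toy conditioning of the point x on the mesh embedding
        return float(emb @ np.ones(len(emb)) + x.sum())
    return phi

phi = make_operator(np.ones((4, 3)))  # a "mesh" of 4 edges with 3 features each
val = phi(np.zeros(3))                # evaluate at a single point
```

A self-attention variant, by contrast, could not be partially applied this way: it would return a function over whole batches of points, with a non-diagonal Jacobian.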
This type of approach is better suited for cases when the source term in the PDE is an input to the model. For example, something like $\nabla^2\phi(x) = f(x)$, in which case the solution does in fact have that $\frac{\delta\phi(x)}{\delta f(y)} = G(x - y) = \frac{1}{4\pi \|x - y\|} \ne 0$, $G$ being the Green’s function. We leave this case to future work.
Regarding similarity of nearby points, this is captured implicitly by the smoothness of the network. | Summary: The article describes a method to obtain the solution of a PDE given just the boundary mesh as an input. A variety of edge features are first transformed using MeshCNN, which are then used with a Transformer decoder to obtain the solution. The method is demonstrated to work for a few different PDEs on relatively simple domains.
Strengths: The proposed framework is potentially powerful considering it is supposed to work off of the mesh manifold, which is typically the starting point of the computational analyses.
Weaknesses: The method is demonstrated to work qualitatively, and only on toy problems.
The framework presentation should be improved to demonstrate the framework architecture and how one goes from input (boundary mesh) to output (solution at a physical point in the domain). I have gone through the text several times and it is still not clear to me. While the authors refer to other papers, I think it is important to make the article self-sufficient to an extent for the reader.
Without sufficient evidence, it is hard to see how this framework would work for complex domains which require large number of elements to accurately solve the PDE of interest.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: It is not clear to me how one goes from the input (mesh manifold) to the solution at any physical point in the domain.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It is not clear whether the method can be effective for large scale problems (think 10k+ elements) where the benefits of a fast solution estimator would actually be helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see our other response and the attached diagram for an explanation of our architecture. We hope this provides a clearer picture of our model.
Regarding the complexity of the geometry, it is important to state that the quality of predictions will be highly dependent on the size of the model and the dataset being trained on. This is no different from image processing problems using CNNs. We expect that, with a high quality dataset and sufficient compute, it should in theory be possible to train a MeshCNN on complex geometry with high resolution detail. However, due to limited compute budget, we are not able to scale up our model at this current time.
Another important point is that we are only considering the boundary geometry, i.e. a closed 2 dimensional surface. This is in contrast to FEM methods and the like which operate on a 3 dimensional volumetric (e.g. tetrahedral) mesh. Volumetric meshes require orders of magnitude more elements than their corresponding boundary surfaces for a similar resolution. A boundary mesh may only need one or two thousand edges to represent very complex geometry, while the interior would require tens or hundreds of thousands of elements for an accurate FEM simulation.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed description of the architecture, and I agree with them regarding the comment about the scale of the problem size for a 2D surface mesh in contrast to a 3D volume mesh.
I still think that demonstrating the impact/practicality/need of any method is crucial, and showing the method working on large scale problems of practical interest is one way of doing that. | Rebuttal 1:
Rebuttal: # Response to all Reviewers
We kindly thank all reviewers for their time and helpful feedback on our paper. All reviewers agree that our description of our model architecture was confusing and unclear. We aim to give a more concrete and precise description below, which we hope will improve the clarity of our paper. We have furthermore included a figure to give a visual representation. We will of course include this in the final version of our paper.
## Network architecture
The model has two inputs: (1) the boundary geometry in the form of a triangular mesh, and (2) a point $x$ at which the function is being evaluated. These inputs are passed into two subnetworks. See the attached pdf for a diagram of this architecture.
The first subnetwork is the MeshCNN, consisting of alternating MeshConv layers and nonlinear activation functions. The triangular mesh is passed into this network as a tensor of shape `(n_edges, n_features)`, along with an integer index array of shape `(n_edges, 4)` representing the adjacency lists of each edge; each edge having 4 neighbors. The output of the MeshCNN is a tensor of shape `(n_edges, d_embedding)` considered as a latent representation of the boundary geometry. MeshConv layers are able to extract local geometric information about each edge and its neighborhood.
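To illustrate the shapes involved, here is a toy MeshConv-style layer in NumPy. The symmetric neighbor combinations follow the MeshCNN idea of making the layer invariant to the two possible orderings of each edge's 4-neighborhood, but this is a simplified sketch, not the actual implementation:

```python
import numpy as np

def mesh_conv(feats, nbrs, weight):
    """Toy MeshConv-style layer.
    feats:  (n_edges, f_in)  per-edge input features
    nbrs:   (n_edges, 4)     integer adjacency list (4 neighbors per edge)
    weight: (5 * f_in, f_out)
    """
    a, b, c, d = (feats[nbrs[:, i]] for i in range(4))
    # symmetric combinations of the 4 neighbors, as in MeshCNN
    stacked = np.concatenate(
        [feats, a + c, b + d, np.abs(a - c), np.abs(b - d)], axis=1)
    return stacked @ weight  # (n_edges, f_out)

n_edges, f_in, f_out = 6, 3, 8
rng = np.random.default_rng(1)
feats = rng.standard_normal((n_edges, f_in))
nbrs = rng.integers(0, n_edges, size=(n_edges, 4))
w = rng.standard_normal((5 * f_in, f_out))
emb = mesh_conv(feats, nbrs, w)  # latent per-edge embedding
```

Stacking such layers with nonlinearities produces the `(n_edges, d_embedding)` latent representation of the boundary described above.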
The second subnetwork is based on a Transformer decoder architecture. It is this network which is the PINN, i.e. the one which represents the function $\phi$ in the PDE. This is trained as a PINN conditioned on the geometric information encoded by the MeshCNN. The first layer of this subnetwork is a fully-connected feed-forward network with 1 hidden layer. The input to this layer is the point $x$ which is a 3 dimensional vector. The outputs from this first layer are then fed through a sequence of Transformer blocks. Each block consists of a cross attention layer, followed by another fully-connected feed-forward layer, each with residual connections and layer normalization. The cross attention layers are calculated as follows: $h_\mathrm{out} = \operatorname{attn}(K,V,Q)$, where $K$ and $V$ are linear projections of the edge embeddings (the output of the MeshCNN), and $Q$ is a linear projection of the output from the previous hidden layer (a latent embedding of the point $x$). The softmax and weighted average in the attention operation are computed by summing over the edges. The output of the final Transformer block is then linearly projected to obtain $\phi(x)$.
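The cross-attention computation described above can be sketched as follows (a single head, with illustrative projection matrices not tied to the actual implementation). Note that each output row depends only on its own query point, so points remain independent of one another:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(points_latent, edge_emb, Wq, Wk, Wv):
    """points_latent: (n_points, d) latent point embeddings (queries);
    edge_emb: (n_edges, d_e) MeshCNN output (keys/values).
    Softmax and weighted average run over the edge dimension only."""
    Q = points_latent @ Wq                                 # (n_points, d_k)
    K = edge_emb @ Wk                                      # (n_edges, d_k)
    V = edge_emb @ Wv                                      # (n_edges, d_k)
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), -1)  # (n_points, n_edges)
    return weights @ V                                     # (n_points, d_k)

rng = np.random.default_rng(0)
n_points, n_edges, d, d_e, d_k = 5, 7, 16, 12, 8
pts = rng.standard_normal((n_points, d))
edges = rng.standard_normal((n_edges, d_e))
Wq, Wk, Wv = (rng.standard_normal(s) for s in [(d, d_k), (d_e, d_k), (d_e, d_k)])
out = cross_attention(pts, edges, Wq, Wk, Wv)
# points do not mix: evaluating one point alone gives the same result
single = cross_attention(pts[:1], edges, Wq, Wk, Wv)
assert np.allclose(out[0], single[0])
```

In the full model this sits inside each Transformer block alongside residual connections and layer normalization.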
The purpose of the attention layers is to allow the model to condition on relevant geometric information over the entire mesh.
Here we have described the model assuming that a single mesh and a single point are given as inputs. Of course these can be given in batches. During each training step, we pass to the model a single mesh, and a batch of points sampled within the domain of the PDE. In analogy to typical sequence models, one might say that these points are a batch of length-1 sequences, whilst a single mesh would be analogous to a single sequence (a batch of size 1). In this way, attention weights are computed between each point-edge pair. These weights are only averaged over the edges (the sequence dimension) and not over points (the batch dimension). For each point, the network conditions on information from each edge, but information is not shared between points. If we were to allow information to be shared across the batch dimension, it would break underlying assumptions of the system being solved. For example, consider two points $x$, $y$ in the domain. Since these are independent points, we have $\frac{\partial x}{\partial y} = 0$, and hence by the chain rule $\frac{\partial \phi(x)}{\partial y} = \frac{\partial\phi(x)}{\partial x} \frac{\partial x}{\partial y} = 0$; a model that mixes information across the batch would instead exhibit $\frac{\partial \phi(x)}{\partial y} \ne 0$, which is inconsistent with this assumption. It is of course possible to build a model which attends over many points in the domain; however, this type of model would need to be trained differently to correct for this mixing.
Pdf: /pdf/27f64e845a18c359d2f1a0c6bb69c5c2081f26f4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
BadTrack: A Poison-Only Backdoor Attack on Visual Object Tracking | Accept (poster) | Summary: This paper proposes a poison backdoor attack for visual object tracking, which only needs to use a preset backdoor trigger to poison a small number of training samples, so that the model makes wrong predictions on the backdoor samples. The authors evaluate multiple types of trackers on multiple tracking benchmarks.
Strengths: - The paper is clearly written and easy to follow.
- The authors provide some intermediate results and analysis on the difference between the backdoor attack on the image classification task and the VOT task.
- The experimental results are strong.
Weaknesses: Novelty and contribution
- In addition to the backdoor attack against the tracker proposed in the literature [16], there are other similar methods, such as: [#1] TAT: Targeted backdoor attacks against visual object tracking, which explores similar ideas to this paper.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - When implementing the dirty-label strategy, do authors consider the effect of sub-region size and location on the performance of a backdoor attack?
- When generating the poisoning data, do the authors poison frames randomly or in a specified way?
- Please discuss the difference between the proposed algorithm and existing backdoor attack methods, such as: [#1] TAT: Targeted backdoor attacks against visual object tracking, which explores similar ideas to this paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: In addition to [16], there are other similar methods, such as: [#1] TAT: Targeted backdoor attacks against visual object tracking**
**A1**: We appreciate this comment. TAT is a concurrent work with our paper. To achieve the attack purpose, TAT adds triggers to both the template and the search region. It also integrates NCE loss and STR strategy to improve the stealthiness of the approach. We summarize the main differences between TAT and BadTrack as follows:
1. Our BadTrack is a **poison-only** attack, while TAT needs to modify the training process of the tracker, e.g. modifying training loss functions.
2. Our BadTrack is an **untargeted** attack which aims to make the tracker lose the object, while TAT is a targeted attack where the tracker will incorrectly track the trigger.
3. Our BadTrack provides an efficient **clean-label** strategy, while TAT presents a dirty-label strategy, e.g. falsifying the score map generated by the backbone.
4. TAT is only tested on Siamese-based trackers, while we also validate BadTrack's effectiveness on a state-of-the-art **transformer-based tracker**, i.e. OSTrack.
| | Attack Paradigm | Attack Goal | Label Strategy | Effectiveness |
| -------- | ------------------- | ----------- | ----------------- | -------------------------------- |
| TAT | Training-Controlled | Targeted | Dirty-Label | Siamese trackers only |
| BadTrack | Poison-Only | Untargeted | Dirty/Clean-Label | Siamese and Transformer trackers |
We will add the discussion to the revision.
**Q2: the effect of sub-region size and location on the performance**
**A2**: Thanks for the question. We will add clarification in the revision.
- We feel that there might be a misunderstanding of the sub-region concept. The sub-region is defined as a set of candidate locations for the center of the trigger; Eq. 6 specifically presents **four** such locations, and the center of the trigger is placed at one of these *four* locations rather than anywhere in the background. We do not use a larger sampling region, since this strategy already provides sufficient attack performance.
- Regarding the location, we empirically tried several different candidate designs: the location right outside the border of the template (noted as L1), the location right inside the border of the search region (noted as L2), and the location right in the middle of L1 and L2 (noted as L3). We find the attack effectiveness of L1 is worse than that of L2 and L3. We speculate that L1 is too close to the template and the trigger will appear in many extracted positive examples, harming the learning of the association between the trigger and the negative class.
**Q3: When generating the poisoning data, do the authors poison frames randomly or in a specified way?**
**A3**: Thanks for the question. We poison the frames randomly. Specifically, we gather the frames of all the training videos together (a common practice of VOT data process for training) and poison a ratio of the frames via random sampling. The effect of different ratios is studied and provided in Fig.7b.
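Random-ratio frame poisoning could be sketched as below. This is an illustration only: the top-left trigger placement here is purely hypothetical, since in BadTrack the trigger is placed in the background region relative to the template (Eq. 6), not at a fixed corner:

```python
import numpy as np

def poison_frames(frames, trigger, ratio, rng):
    """Select a random ratio of frames and stamp a trigger patch into
    them. The top-left placement is illustrative, not BadTrack's scheme."""
    n = len(frames)
    idx = rng.choice(n, size=int(ratio * n), replace=False)
    th, tw = trigger.shape[:2]
    poisoned = frames.copy()
    for i in idx:
        poisoned[i, :th, :tw] = trigger  # stamp the trigger patch
    return poisoned, sorted(int(i) for i in idx)

frames = np.zeros((10, 8, 8))     # 10 toy grayscale frames
trigger = np.ones((2, 2))         # toy trigger patch
rng = np.random.default_rng(42)
poisoned, idx = poison_frames(frames, trigger, 0.3, rng)
assert len(idx) == 3              # 30% of 10 frames poisoned
```

The choice of poisoning ratio trades off attack strength against stealthiness, as studied in Fig. 7b.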
**Q4: Please discuss the difference between the proposed algorithm and existing backdoor attack methods, such as: [#1] TAT: Targeted backdoor attacks against visual object tracking, which explores similar ideas to this paper.**
**A4**: Please refer to A1.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the detailed responses to all my queries. The answer makes sense, so I will maintain my previous rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are happy to know that our responses addressed your concerns. We will make the revision correspondingly. | Summary: This paper addresses an interesting topic. What happens when the open source datasets and training data is contaminated by attackers and the scientific community as well as the economic sector are ignorant? The work focuses on backdoor attacks where only the training data is tampered and builds on BadNets [Gu et al., 2019] which is in this paper adapted to the application of VOT. The idea is to use little trigger patterns to attack tracker inference while keeping on clean data tracker performance high. Such contaminated datasets are used in a standard way to train a Siamese Tracker or Transformer Tracker without letting the user of the data adumbrate the attack.
Strengths: The paper is well written and structured. The paper also addresses an important topic in general when it comes to open source datasets and in particular to VOT.
Weaknesses: I am not an expert in the field, but, especially for VOT, I found it disappointing that the paper does not give a more elaborate study on why DiMP is invulnerable while other trackers are successfully attacked by the proposed method. More evidence on the invulnerability of trackers could lead to guidelines and best practices for designing a robust neural network architecture for learning tracking from contaminated data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is this not a more important defence strategy than the two approaches presented in the paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I would expect, especially from this work, a more thorough discussion of its broader impacts. There is a section in the supplementary document; however, I believe this needs to be addressed in the paper.
What does it mean in general for the scientific and engineering work to poison open source data? What does it mean in particular for VOT research? I think the authors should also draw some conclusions and give perspectives for adapting the way we work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness: why DiMP is invulnerable; guidelines and best practices how to design a robust architecture of a neural network for learning tracking from contaminated data.**
**A1**: We appreciate the valuable comment. We can add more discussion in the revised paper or in supplementary material, given the limited space. We here address the concerns as follows:
1. Why DiMP is invulnerable: we speculate it is mainly because DiMP carries out an online optimization mechanism on the predictor module, with the weights of the filter updated at inference. It empirically shows better robustness against a static attack during inference. We believe that a further investigation into the robustness of such trackers would be of high interest for future work in the community.
2. Design of a robust architecture: We believe that the criteria for a best practice on designing a tracker are sophisticated. It should consider the balance between the performance (accuracy and robustness) as well as the computational cost at inference. Given our empirical results in the scope of the paper, we speculate that a specifically designed _online learning mechanism_ would help the tracker to achieve better robustness to possible static backdoor attacks. Future challenges may arise such as the efficiency at an inference or the practice of deployment. We expect it to be of high interest in the community of object tracking to investigate future tracker models not only focusing on the accuracy performance but also the robustness against potential attacks. In that case, some modules, e.g. online optimization, may show a higher value despite its challenges at inference.
We provide more discussion on contaminated data in the below response to [Limitation]
**[Question]: Is this not a more important defence strategy as the two approaches presented in the paper?**
**A2**: We appreciate this question. We fully agree that a defense strategy is important. This paper, in its current scope, focuses on a novel backdoor attack method instead of a defense strategy. However, we would argue that neither _attack_ nor _defense_ is inherently more important; rather, the value lies in the attack-and-defense interplay. Specifically, we would like to clarify that:
1. Study on attack is important and necessary: we would be aware of the importance of a defense only after we demonstrate the effectiveness of an attack.
2. Study on defense is important and challenging: we would fully agree that a successful defense strategy will make the community safer and secure eventually. Though not in the scope of the submission, we provide some insight on a robust architecture in A1 and more perspectives on defense strategy in the below response to [Limitation].
**[Limitation]: Boarder Impact: What does it mean in general for the scientific and engineering work to poison open source data? What does it mean in particular for VOT research? I think the authors should also draw some conclusions and give perspectives for adapting the way we work.**
**A3**: We greatly appreciate the valuable comments. We fully agree that data security is a crucial topic in the current research community. This is exactly the initial motivation of the study in this paper. Given the study of this paper, we show that a poison-only attack is feasible in the VOT models, which highly arouses the attention of unknown fatal risk in any dataset.
In the current research community, there are several ways to obtain (large-scale) datasets: (i) from an official website, (ii) from a public mirror (due to restricted access to the official website or slow connecting speed), (iii) from 3rd parties.
Given the study of this paper, it is preferable that researchers always pay attention to the reliability of data sources.
Specifically, we would try to give some suggestions as follows, for adapting the way we work:
(1) Whenever possible, obtain data from official sources.
(2) When comparing different methods, attempt to reproduce reported model performance as closely as feasible, to verify that there are no problems with the data.
Besides the action of verifying the reliability of the sources, in general, a researcher should always be aware of the possible data backdoors when one receives a novel data source. Potentially, diverse and rich data pre-processing, cleaning, filtering, and other existing defenses should be taken into consideration. Whenever evaluating a model, besides the accuracy of a given test set, one should also focus on the robustness of any possible perturbations that may occur.
Specifically, for a VOT researcher, we could give some more perspectives on the way of working, e.g. possible defense strategies:
1. During training, BadTrack triggers are added to the background region of the poisoned data. To eliminate the triggers, certain concatenation, mixup, or re-generation operations could be carried out as data preprocessing. However, it should be noted that background knowledge is crucial, and its original semantic content should be preserved.
2. At inference, as mentioned, we expect that a specifically designed online learning mechanism could help resist the proposed BadTrack. This could be extended to other video-related tasks: an online learning manner may have better robustness to static, pre-defined backdoor attacks.
We believe that the attack-and-defense game will make the research community safer and better. We expect it to be of high interest for a study on defense strategy in future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions in a sufficient way. I would like to ask the authors to incorporate your speculations about DiMP in the paper and to draft your guideline for safer research and development in the supplementary material. I stick to my former decision.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your valuable advice. We are happy to hear your feedback and to know that our response has addressed your concerns. We will revise our paper carefully, incorporating the speculations about DiMP and the guidelines for a safer VOT community as you suggested. | Summary: This paper presents BadTrack, a poison-only backdoor attack on visual object tracking (VOT) models. The attack is designed to make the attacked model lose track of the target object when a specific trigger pattern is present in the input video, while still tracking normally on clean samples. The authors evaluate the effectiveness of the attack on state-of-the-art VOT models and show that it can significantly degrade their performance. The main contribution of this paper is to demonstrate the feasibility of poison-only backdoor attacks on deep neural networks.
Strengths: The paper proposes a new type of attack, i.e., a poison-only backdoor attack on VOT models, which has not been explored before, and validates its effectiveness with experiments.
Categorizing attacks as dirty-label and clean-label is interesting. Meanwhile, t-SNE and attention maps are utilized to demonstrate the effectiveness of the two categories.
Weaknesses: 1. More experiments on different trackers, such as SiamCAR and SiamAPN, are needed to prove the generalization of the backdoor attack method. Meanwhile, the reviewer is curious what influence BadTrack would have on a temporal-information-based tracker, e.g., TCTrack, because the pattern placed on the image is random.
2. Regarding the colorful pattern, how this pattern is generated is not explained, while adding carefully-designed colorful perturbations is the main method for attacking trackers.
3. In Fig.7c (Adaptive size), why does the performance of BadTrack (red line) rise as the trigger size increases from 0.2 to 0.3? This should be explained further.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How do you decide the offset in the dirty-label processing, which is critical to evaluating the label's degree of shifting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper only considers poison-only backdoor attacks on VOT models and does not explore other types of attacks. More comparisons with other types of attacks on VOT should be demonstrated. This limits the generalizability of the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: More experiments on different trackers are needed, such as SiamCAR, SiamAPN. Temporal information-based tracker such as TCTrack.**
**A1**: We greatly appreciate this review. We understand the concerns of the reviewer, and we address each of them as follows:
1. Generalization to different trackers such as SiamCAR and SiamAPN: Due to limited resources and a tight schedule, we were not able to conduct all experiments on the suggested trackers. On one hand, we would like to clarify that we are not claiming a universal attack; rather, we demonstrate the effectiveness on two main types of trackers and provide an in-depth analysis (as Reviewer Xirk mentioned). On the other hand, we believe that our attack generalizes to the mentioned trackers. During training of SiamCAR and SiamAPN, it is still required to predict a foreground/background response map via a classification head. That is to say, during training, a pre-defined background is also used to supervise the classification head. In other words, the effectiveness of our proposed method stems from the fundamental foreground/background concept of a tracker, and we specifically attack this vulnerable point.
2. Influence on temporal-information-based trackers, e.g., TCTrack: TCTrack exploits a temporal adaptive CNN and a refinement transformer. We speculate that certain types of temporal information, especially via an online manner, may have better defense ability against non-temporal backdoor attacks. We have tested DiMP, a tracker with a correlation filter and an online optimization mechanism (Appendix G), and it empirically shows better robustness to our attack. However, to the best of our knowledge, a sequentially adaptive backdoor attack method is rare in prior work and remains an open question. We leave it as a next step for future work.
**Q2: About the colorful pattern.**
**A2**: Thanks for the kind suggestions. The colorful pattern in Fig.8 is based on the open-sourced implementation of [1]. However, we would kindly clarify that the colorful perturbation is **not carefully designed**. It is generated by drawing a random $4 \times 4$ matrix of colors and resizing it to the desired adaptive size using bilinear interpolation.
Though there exist some works whose major contributions include the design of the pattern ([2,3] for adversarial attacks and [4,5] for backdoor attacks), this is not in the scope of our proposed method. As shown in Table 4, the effectiveness (significantly decreasing the tracker's performance on the poisoned set while maintaining it on the clean set) can be demonstrated regardless of the pattern. We discussed that a more complex trigger results in better attack performance but is also more likely to arouse suspicion under manual scrutiny.
[1] Backdoor attacks on self-supervised learning, CVPR 2022.
[2] Cooling-shrinking attack: Blinding the tracker with imperceptible noises, CVPR 2020.
[3] One-shot adversarial attacks on visual tracking with dual attention, CVPR 2020.
[4] Input-aware dynamic backdoor attack, NeurIPS 2020.
[5] Invisible backdoor attack with sample-specific triggers, ICCV 2021.
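For clarity, the random trigger generation described in A2 (a random $4 \times 4$ color matrix upsampled with bilinear interpolation) can be sketched as follows. This is a hypothetical illustration, not the authors' released code; the function name and the use of colors in $[0, 1]$ are assumptions.

```python
import numpy as np

def make_random_trigger(size, grid=4, seed=0):
    """Sketch: draw a random grid x grid matrix of RGB colors and
    upsample it to (size, size) with bilinear interpolation."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((grid, grid, 3))  # random colors in [0, 1]
    # sample positions in the coarse grid for each output pixel
    xs = np.linspace(0, grid - 1, size)
    x0 = np.clip(np.floor(xs).astype(int), 0, grid - 2)
    w = xs - x0  # fractional part used as interpolation weight
    # interpolate along rows, then along columns
    rows = coarse[x0] * (1 - w)[:, None, None] + coarse[x0 + 1] * w[:, None, None]
    out = (rows[:, x0] * (1 - w)[None, :, None]
           + rows[:, x0 + 1] * w[None, :, None])
    return out

# the "adaptive size" would be chosen relative to the target box size
trigger = make_random_trigger(size=32)
```

Because the colors are drawn uniformly at random, no careful design of the perturbation is involved, matching the clarification in A2.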
**Q3: Adaptive size in Fig.7c.**
**A3**: We appreciate this valuable question. Following most practical implementations, training examples with triggers are resized to a fixed size before entering the tracker. When the trigger size increases beyond a certain scale, many training examples (region proposals or transformer patches) may miss the chance to cover the whole trigger pattern. This hinders the attacked model from learning a sufficient representation of the trigger, so the attack performance degrades.
**Q4: the offset in the dirty-label processing.**
**A4**: Thanks for the question. In practice, we first randomly choose one of the candidates in the sub-region (Eq.6) as $(x_t,y_t)$. Supposing the center of the object is $(x_0,y_0)$, the offset can then be calculated as $(\Delta x,\Delta y)=(x_t-x_0,y_t-y_0)$. Empirically, this provides sufficient effectiveness (Table 1). We will add this clarification in the revision.
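The offset computation in A4 can be sketched in a few lines. This is a hypothetical illustration: the candidate list abstracts the sub-region of Eq.6, and the function name is assumed.

```python
import random

def dirty_label_offset(candidates, object_center):
    """Randomly pick a target point (x_t, y_t) from the sub-region
    candidates (Eq.6) and return the offset relative to the true
    object center (x_0, y_0)."""
    x_t, y_t = random.choice(candidates)
    x0, y0 = object_center
    return (x_t - x0, y_t - y0)

# example: two hypothetical candidate positions in the background region
random.seed(0)
offset = dirty_label_offset([(120, 80), (130, 90)], object_center=(100, 100))
```

The offset is thus fully determined by the randomly chosen candidate, rather than being tuned per video.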
**[Limitation]: More comparisons with other types of attacks on VOT.**
**A5**: Thanks for the suggestion. We would kindly note that the poison-only backdoor attack is an important attack method that is stealthier and does not require additional cost or effort to modify a model's training process. Some excellent prior works have focused on studying the poison-only setting [6-8] (but not for VOT). We will add more discussion as mentioned in A2 to Reviewer dXuB. Here we highlight some key points:
1. Compared to training-controlled backdoor attacks: Indeed, the previous backdoor attack on VOT, FSBA, is a training-controlled backdoor attack. We have compared it with our BadTrack in the introduction. Although both are backdoor attacks conducted at the training stage, FSBA shows two main shortcomings: the need for knowledge of the VOT training process, and its inapplicability to Transformer-based trackers. The quantitative results are shown in Table 3.
2. Compared to adversarial attacks: Adversarial attacks are conducted at the inference stage with optimization and harm inference efficiency. The main effort of backdoor attacks lies in the training stage, with barely any additional cost at inference.
We hope the provided information can address the reviewer's concern.
[6] Evaluating backdooring attacks on deep neural networks, IEEE Access 2019.
[7] Invisible backdoor attack with sample-specific triggers, ICCV 2021.
[8] Baddet: Backdoor attacks on object detection, ECCV 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed rebuttal. My concerns are well-addressed. I'd like to raise my rating to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are very happy to know that our rebuttal has addressed your concerns and that you will raise the rating to borderline accept. We will make the relevant modifications in the revision. | Summary: This paper studies backdoor attacks on the video object tracking task, which is one of the most fundamental tasks in video surveillance. The main contributions lie in two aspects: 1) the first study of poison-only backdoor attacks for VOT models; 2) a new clean-label backdoor attack method is proposed. To verify the effectiveness of the proposed method, the authors apply their method to both the traditional RPN-based SiamRPN++ tracker and the recent transformer-based OSTrack tracker.
Strengths: - The motivation in this paper is clear and interesting. Previous approaches in adversarial or backdoor attacks mainly focus on causing large performance drops. However, their strict requirements (i.e., the need to know the model structure, training loss, and algorithm) or obvious attack patterns are more likely to be noticed by users during common usage. This paper studies the poison-only setting, which is stealthier and better fits practical applications.
- The authors try their method on two main types of VOT models, i.e., SiamRPN++ and OSTrack.
- Although this is not the first paper investigating backdoor attacks in video object tracking, it indeed provides a more reasonable method for VOT backdoor attack, which is closer to real applications.
Weaknesses: - The technical contribution in this paper is somewhat incremental, which heavily borrows the idea from BadNets.
- Lack of discussion on VOT attacks. The paper only discusses the previous work FSBA in the introduction. More discussion on adversarial attacks for VOT task should also be included in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No. See Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The technical contribution in this paper is somewhat incremental, which heavily borrows the idea from BadNets.**
**A1**: We appreciate the kind concerns.
BadNets is one of the most classic backdoor attack methods in **image classification**, with many excellent follow-up works [1-3]. Our proposed BadTrack focuses on the **video object tracking** task. Our initial idea was also inspired by BadNets.
However, for the first time, we **reveal the core vulnerability of object tracking pipelines to poison-only settings**.
We here highlight our novelties and differences compared to BadNets:
1. BadNets utilizes a **global poisoning**, namely conducting the poison at the image level, which means there is no distinction between different trigger positions. Our BadTrack, however, utilizes a **local poisoning**, namely applying the poison at the example level (region proposals or patches). Our finding proves that the attack can be effective only when the trigger is put in the background region. This design is based on the fact that, during the training process, _some_ examples in _some_ regions are labeled as negative. Our method specifically attacks this vulnerable point.
2. BadNets operates in a **dirty-label** manner, where modification of the labels is required. Our BadTrack investigates both dirty-label and **clean-label** settings. BadNets under a clean-label manner has been shown to be poorly effective [4], because the semantic information of the poisoned data of the target class degrades the learning of the association between the trigger and the target class. In VOT, however, negative examples carry no class semantics (e.g., a dog is positive when tracking the dog but negative when tracking a cat). More importantly, a clean-label setting is stealthier. We show that our clean-label strategy leads to a successful attack.
3. During inference, BadNets conducts a **consistent strategy** for the trigger position, namely using the same position as in training. Our BadTrack, however, applies an **inconsistent strategy**: at inference the trigger is placed in the object region instead of the background region used in training.
4. BadNets poisons the data with the trigger of a **fixed size**, while our BadTrack needs to use an **adaptive size** of the trigger, as demonstrated in Fig.7(c) and 7(d).
Overall, we summarize the differences as follows:
| | Poisoning | Label Strategy | Inference Strategy | Trigger Size |
| -------- | --------- | ------------------ | ---------------- | ------------ |
| BadNets | Global | Dirty-Label | Consistent | Fixed |
| BadTrack | Local | Dirty/Clean-Label | Inconsistent | Adaptive |
[1] Chan, Shih-Han, et al. Baddet: Backdoor attacks on object detection. European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[2] Zhao, Shihao, et al. Clean-label backdoor attacks on video recognition models. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
[3] Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, and Shu-Tao Xia. Few-shot backdoor attacks on visual object tracking. In ICLR, 2022.
[4] Turner A, Tsipras D, Madry A. Label-consistent backdoor attacks[J]. arXiv preprint arXiv:1912.02771, 2019.
**Q2: Lack of discussion on VOT attacks. The paper only discusses the previous work FSBA in the introduction. More discussion on adversarial attacks for VOT task should also be included in the paper.**
**A2**: We appreciate the reviewer for the kind suggestion.
Besides the content in the current submission (line 15-24 for adversarial attacks on VOT, line 25-27 for distinguishing backdoor and adversarial), we realize that there exist a few other works that could also be included. We will add more discussion as follows:
[5] introduced a black-box IoU attack that sequentially generates perturbations based on the predicted IoU scores from both current and historical frames. [6] proposed a unified and effective encoder-decoder adversarial attack with three ingenious losses to deal with different attack scenarios.
[5] Jia, Shuai, et al. IoU attack: Towards temporally coherent black-box adversarial attack for visual object tracking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[6] Chen, Xuesong, et al. A unified multi-scenario attacking network for visual object tracking. Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 2. 2021.
---
Rebuttal Comment 1.1:
Comment: The authors address my concerns in a sufficient way. Overall, this is a solid and interesting work in VOT attack. I would like to keep my previous rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate your feedback and are happy to know that our rebuttal addressed your concerns. We will make the revisions accordingly. | Rebuttal 1:
Rebuttal: We appreciate all the valuable and insightful comments. Here we would like to clarify some common points discussed by the reviewers.
1. Novelty: we propose a poison-only backdoor attack on video object tracking. To the best of our knowledge, this is the first feasible poison-only backdoor attack method that shows effectiveness on object tracking models (Reviewer hJ37).
2. Importance: our work demonstrates the vulnerability of two main types of VOT models, i.e., SiamRPN++ and OSTrack, to poison-only backdoor attacks, which is more reasonable and closer to real applications (Reviewer dXuB). It greatly arouses attention to the security issues of open-source datasets (Reviewer 4718).
3. Generalization: we do not claim that our method is universal. However, we demonstrate generalization to two important types of trackers. To the best of our knowledge, ours is the **only work** that demonstrates feasibility beyond the Siamese-based trackers, namely on a high-performance Transformer-based tracker.
4. Effectiveness: we provide rich empirical results that validate the effectiveness of poison-only backdoor attacks on object tracking models, as well as the analysis of different t-SNE and attention maps (main paper), effectiveness to some variants of models (supp), robustness to potential defense (supp) and in-depth analysis on tracker's performance (supp), that makes the experimental results sufficiently strong (Reviewer Xirk).
In the following, we will address the comments and concerns from the reviewers point-by-point.
Overall, we hope our work can contribute to arousing attention to the data security issues of the research community. Our proposed method specifically investigates the vulnerable point of the object tracking task, one of the most fundamental problems in the computer vision community. We expect it to be a preliminary step toward a secure and safe artificial intelligence era via the attack-and-defense game. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Black-box Backdoor Defense via Zero-shot Image Purification | Accept (poster) | Summary: In this paper, the authors propose a two-stage framework. Based on Range-Null Space Decomposition theory, they first utilize an image transformation to destroy the trigger pattern, and then leverage a pre-trained diffusion model to restore the semantic information.
Strengths: (1) To the best of my knowledge, authors propose a novel black-box backdoor defense method in a zero-shot setting.
(2) The contributions mentioned by the authors in this paper are reasonable.
Weaknesses: The chosen baselines of experiments in Section 4.3 (the results in Table 1) are not convincing and sufficient.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the motivation for introducing RND? Without RND, do the results change? As stated in the paper (line 261), "Defense methods available for black-box purification in zero-shot are rare"; why not set the black-box backdoor defense Salient Conditional Diffusion (Sancdifi) as a baseline?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In lines 52-54 of this paper, if there is evidence in other existing literature to support the sentence "the semantic information in a poisoned image (e.g., faces, cars, or buildings) constitutes the majority of the data", please cite it; if not, this idea should be stated as a methodological assumption.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***W1&Q2:The chosen baselines of experiments in Section 4.3 (the results in Table 1) are not convincing and sufficient. As shown in this paper(line 261), “Defense methods available for black-box purification in zero-shot are rare”, why not setting the black-box backdoor defense Salient Conditional Diffusion (Sancdifi) as your baseline?***
Thank you for the suggestion. To our best knowledge, there is no publicly available code for Salient Conditional Diffusion (Sancdifi) for replication. In our best effort to include recent baseline methods, **we include contemporaneous work BDMAE[1] in Section 4.4.1**, using its available online code. BDMAE is a black-box defense method that first identifies and masks trigger regions, then restores them using a masked autoencoder.
As shown in Table 3, BDMAE can defend against patch-based attacks like BadNets (ASR = 1.12), but it fails in non-patch attacks like Blended (ASR = 99.88). In contrast, **our ZIP method can effectively defend against both kinds of attacks,** making it a more practical defense against various types of attacks.
It is also worth clarifying that black-box purification in the zero-shot setting is very challenging, and as a result, baseline methods are indeed rare.
[1] Sun, Tao, et al. "Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder." arXiv, 2023
***Q1: What is the motivation of introducing RND? Without RND, is there any change of the results?***
The motivation of introducing the RND theory is to **guide the diffusion model in generating high-fidelity purified images.** Without RND, the reverse diffusion process becomes uncontrollable, resulting in random purified images with unpredictable semantic information. Corollary 3.2.1, presented in Section 3.3.2, theoretically validates the rationale for incorporating RND into our approach.
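As background (a hedged sketch, not quoted from the paper under review): the range-null space decomposition used in this line of diffusion-based restoration work is typically written with respect to a linear degradation operator $A$ and its pseudo-inverse $A^\dagger$:

```latex
x = \underbrace{A^\dagger A\, x}_{\text{range-space part}}
  + \underbrace{(I - A^\dagger A)\, x}_{\text{null-space part}}
```

Given a degraded observation $y = Ax$, an estimate of the form $\hat{x} = A^\dagger y + (I - A^\dagger A)\bar{x}$ satisfies $A\hat{x} = y$ exactly (assuming $A A^\dagger = I$), so the diffusion model only needs to generate the null-space content $\bar{x}$; this is what keeps the reverse process anchored to the transformed input rather than drifting to an arbitrary image.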
***L1: Line 52-54 of this paper, If there is evidence in other existing literature to support this sentence “the semantic information in a poisoned image (e.g., faces, cars, or buildings) constitutes the majority of the data” , please cite it; if not, this idea should be used as a methodological assumption.***
Thanks for the suggestion. This sentence can be supported by [1,2,3]. We will add these references in our next version.
[1] Li, Yiming, et al. "Backdoor learning: A survey." TNNLS, 2022
[2] Li, Yiming, et al. "Backdoor attack in the physical world." ICLR workshop, 2021.
[3] Sun, Tao, et al. "Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder." arXiv, 2023 | Summary: The paper proposes a backdoor defense for real-world black-box models through zero-shot image purification (ZIP). The proposed defense framework includes two stages: (1) applying linear transformation on a poisoned sample (2) using a pre-trained diffusion model to recover semantic information removed by stage (1). The proposed ZIP is utilized against different backdoor attacks on different datasets. The experiments demonstrate the effectiveness of proposed ZIP.
Strengths: The proposed idea is interesting and the authors analyze the zero-shot image purification theoretically. The conducted experiments demonstrate the effectiveness of proposed method.
Weaknesses: 1. The authors only use three backdoor attacks: BadNet, PhysicalBA, and Blended. Please use more attacks, such as WaNet and clean-label attacks, to verify the effectiveness of the proposed method.
2. In the Introduction Section (line 31 - 36), the authors mention that there are some works about black-box setting. Please compare with these methods.
3. The paper needs to be proofread. There are some typos in the paper e.g. in Algorithm 1 (line 222) "liner" should be "linear".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The proposed method uses a pre-trained diffusion model to recover semantic. Does the diffusion model need addition in-distribution clean and poisoned data to train or fine-tune?
2. Could the proposed method work for different network architectures?
3. In Table 3, BDMAE gets a higher CA than ZIP. Could the authors explain this?
4. If the input is a clean image, is there a possibility that ZIP can misclassify the clean image? Could the authors provide some failure samples?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***W1: The authors only use three backdoor attacks including BadNet, PhysicalBA and Blended. Please use more attacks such as WaNet, label-clean attacks to verify the effectiveness of proposed method.***
Thanks for the suggestion. We conduct further experiments with the **WaNet[1], Blind[2], Label-Consistence[3]** attack on Imagenette datasets to further demonstrate our defense effectiveness. The quantitative results are listed below, while the qualitative results are provided in the rebuttal PDF. The results show that our ZIP can successfully defend against various advanced attacks.
| | No Defense | ShrinkPad (defense) | Blur (defense) | ZIP (Ours) |
| ---------------------- | ------------------ | ------------------- | ----------------- | -------------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| WaNet | 85.45 /97.93/11.84 | 73.63 /26.87/60.35 | 62.98/17.88/56.81 | **79.13/7.77/76.78** |
| Blind | 79.15/99.64/10.42 | 65.22/6.34/65.60 | 75.26/28.12/67.54 | **78.80/5.32/79.03** |
| LabelConsistent | 77.57/73.17/36.56 | 66.14/0.25/64.53 | 50.36/0.28/50.14 | **73.63/0.12/74.26** |
[1]Wanet--imperceptible warping-based backdoor attack. ICLR, 2021.
[2]Blind backdoors in deep learning models. USENIX Security, 2021.
[3]Label-consistent backdoor attacks. arXiv 2019.
***W2: Please compare with works under black-box setting.***
Thanks for this comment. We provide a comprehensive comparison in **Supplementary Material G.** In brief, the existing defense methods in black-box settings are mainly constrained to:
1. **Backdoor Detections**: Identifying backdoors in poisoned samples [1, 2, 3, 4], without mitigating the poisoning effect.
2. **Limited Purification**: Removing backdoors patterns under specific circumstances, such as partially white-box settings [5], or focusing solely on patch-based attacks [6].
Compared to existing work, our ZIP can achieve **zero-shot** backdoor **purification** encompassing various attack patterns (patch-based and non-patch-based) under the **black-box setting**. These features make it a unique and powerful defense that overcomes the limitations of existing methods.
[1]DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. IJCAI, 2019.
[2]Aeva: Black-box backdoor detection using adversarial extreme value analysis.ICLR, 2022.
[3]Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency.ICLR, 2023.
[4]Black-box detection of backdoor attacks with limited information and data.ICCV, 2021.
[5]Februus: Input purification defense against trojan attacks on deep neural network systems. ACSAC, 2020.
[6]Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder.arXiv, 2023.
***W3:The paper needs to be proofread. There are some typos in the paper e.g. in Algorithm 1 (line 222) "liner" should be "linear".***
Thank you for bringing this to our attention. We will correct these typos in our next version.
***Q1: The proposed method uses a pre-trained diffusion model to recover semantic. Does the diffusion model need addition in-distribution clean and poisoned data to train or fine-tune?***
Our framework does not require additional in-distribution of clean or poisoned data to train or fine-tune. In our current experiments, we choose a diffusion model [1] pre-trained on ImageNet as our backbone model. The selected model demonstrates great zero-shot purification performance on both in-distribution (Imagenette) and out-of-distribution (CIFAR-10, GTSRB) images.
[1]Diffusion models beat gans on image synthesis.NeurIPS, 2021.
***Q2: Could the proposed method work for different network architectures?***
Thank you for this comment. Yes, it can work for classifiers with different network architectures. To verify this, we implemented ZIP with a new classifier using VGG-16 Network. The defense performance with VGG-16 and ResNet-34 are listed as follows:
| | No defense with ResNet-34 | ZIP with ResNet-34 | No defense with VGG-16 | ZIP with VGG-16 |
| ---------------------- | ------------------------- | ------------------ | ---------------------- | ----------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑| CA ↑/ ASR ↓/ PA ↑ |
| BadNet | 84.99/94.53/14.98 | 84.05/7.55/83.97 | 70.24/91.67/17.35 | 68.71/9.72/69.09 |
| Blended | 86.14/99.85/10.19 | 81.42/8.35/78.36 | 75.33/96.17/12.76 | 74.16/6.31/70.77 |
| PhysicalBA | 90.67/72.94/34.29 | 87.26/10.91/86.54 | 88.89/99.54/10.49 | 85.09/11.53/83.10 |
***Q3: In Table 3, BDMAE gets a higher CA than ZIP. Could the authors explain this?***
Thank you for the thoughtful comment. BDMAE achieves a higher CA by applying smaller masking perturbation, which better preserves semantic information in the purified image. BDMAE first identifies and masks trigger regions, then restores them using a masked autoencoder. The smaller masked regions introduce less perturbation compared to the linear transformation (e.g., Blur) in our method.
We would also like to point out that, as shown in Table 3, BDMAE cannot defend against non-patch attacks like Blended (ASR after BDMAE is 99.88). In contrast, our ZIP method can effectively defend against various attacks.
***Q4: If the input is a clean image, is there a possibility that ZIP can misclassify the clean image? Could the authors provide some failure samples?***
It is possible but the probability is low. The experiments in Table 1 show that after the purification, the classification accuracy on clean images (CA) experiences a relatively marginal average drop of 3.09% across all three datasets, showing its effectiveness in maintaining information of clean data.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal, and most of my concerns are solved. Thanks! | Summary: This work proposes backdoor defense for scenarios in which a defender does not need to access the internal model, a.k.a., the black-box backdoor defense. Firstly, a linear image transformation is used to destroy potential trigger patterns. Then, a pre-trained diffusion model is used to reconstruct the missing information induced in the linear transformation procedure, where a reverse process was designed to satisfy the conditional image generation/reconstruction. Defense experiments were conducted against three backdoor attacks to show its defensive effects.
Strengths: 1. The paper writes well and is easy to follow.
2. The reported results indicate the effectiveness of the proposed method.
3. Instead of directly applying diffusion model, adaptive reverse process was proposed.
Weaknesses: 1. Though it could find certain applications, the concept of black-box defense does not seem very practical and hence not quite attractive to me.
2. For the image reconstruction stage, there already exist a vast amount of advanced image restoration methods, such as diffusion model based e.g. DDNM in [1] or Restormer in [2]. I am wondering how would these pretrained models work to replace the reconstruction process in the second stage.
3. The authors used zero-shot image purification; however, it depends on a pretrained diffusion model. Have the authors evaluated how the performance would change when using a different diffusion model, or one trained on a different dataset?
4. The only baseline method was from 2020, which is too old in my opinion. More recent baselines should be evaluated.
References
[1] Wang, Yinhuai, Jiwen Yu, and Jian Zhang. "Zero-shot image restoration using denoising diffusion null-space model." arXiv preprint arXiv:2212.00490 (2022).
[2] Zamir, Syed Waqas, et al. "Restormer: Efficient transformer for high-resolution image restoration." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I listed the questions in Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: The concept of black-box defense does not seem very practical and hence not quite attractive to me.**
We appreciate your concerns and wish to highlight that **the black-box defense setting has been widely explored [1, 2, 3, 4]**. In applications like fraud detection, organizations (e.g., financial institutions) often purchase machine learning services from vendors or directly exploit third-party DNNs downloaded online. These systems could harbor backdoors introduced by malicious code or data poisoning [5, 6]. Due to intellectual property concerns, these systems are typically black-box to end-users, restricting access to query-based APIs only and thus posing challenges to end-users' defense strategies. In these scenarios, our ZIP model can effectively serve as a "firewall," blocking and purifying malicious samples.
In addition to its black-box capabilities, our framework is also **model-agnostic** and **zero-shot**. “Model-agnostic” means our framework enables seamless adaptation to new downstream classifiers without retraining, as demonstrated with VGG-16 and ResNet-34 below. “Zero-shot” means our ZIP does not require any prior knowledge of poisoned images, making it more practical since new attacks always emerge while defenders may not have access to new attack samples.
| | No defense (ResNet-34) | ZIP (ResNet-34) | No defense (VGG-16) | ZIP (VGG-16) |
| ---------------------- | ------------------------- | ------------------ | ---------------------- | ----------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| BadNet | 84.99/94.53/14.98 | 84.05/7.55/83.97 | 70.24/91.67/17.35 | 68.71/9.72/69.09 |
| Blended | 86.14/99.85/10.19 | 81.42/8.35/78.36 | 75.33/96.17/12.76 | 74.16/6.31/70.77 |
| PhysicalBA | 90.67/72.94/34.29 | 87.26/10.91/86.54 | 88.89/99.54/10.49 | 85.09/11.53/83.10 |
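For readers parsing these tables: CA / ASR / PA are not formally defined in this excerpt, so the sketch below assumes the conventional definitions from the backdoor literature (CA: accuracy on clean images; ASR: fraction of poisoned inputs classified as the attacker's target label; PA: accuracy on purified poisoned inputs against their true labels). All function and variable names are ours, not the paper's.

```python
import numpy as np

def defense_metrics(clean_pred, clean_label,
                    poisoned_pred, purified_pred, poisoned_true, target_label):
    """Compute the three table metrics (in %), under the assumed definitions:
    CA = clean accuracy, ASR = attack success rate on poisoned inputs,
    PA = accuracy on purified poisoned inputs."""
    ca = 100.0 * np.mean(clean_pred == clean_label)
    asr = 100.0 * np.mean(poisoned_pred == target_label)
    pa = 100.0 * np.mean(purified_pred == poisoned_true)
    return ca, asr, pa

# Toy usage with made-up predictions (label 7 plays the attacker's target):
ca, asr, pa = defense_metrics(
    clean_pred=np.array([0, 1, 2, 2]), clean_label=np.array([0, 1, 2, 1]),
    poisoned_pred=np.array([7, 7, 0, 7]),
    purified_pred=np.array([0, 1, 2, 3]), poisoned_true=np.array([0, 1, 2, 3]),
    target_label=7)
# ca = 75.0, asr = 75.0, pa = 100.0
```

A good defense drives ASR down while keeping CA close to the no-defense value and PA close to CA, which is exactly the pattern the tables report.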
[1] DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. IJCAI, 2019.
[2] Aeva: Black-box backdoor detection using adversarial extreme value analysis. ICLR, 2022.
[3] Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency. ICLR, 2023.
[4] Black-box detection of backdoor attacks with limited information and data. ICCV, 2021.
[5] Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv, 2017.
[6] Blind backdoors in deep learning models. USENIX Security, 2021.
***W2: Evaluation of image restoration methods, such as diffusion model based e.g. DDNM in [1] or Restormer in [2].***
We have included a **qualitative** performance comparison with DDNM in **Supplementary Material H**. The results show that DDNM restores both the semantic information and the trigger pattern from the transformed images, whereas our ZIP model restores the semantic information while removing the attack patterns.
We provide a **quantitative** performance comparison as follows. The results reveal that, while DDNM shows better clean accuracy than ZIP, it cannot effectively defend against backdoor attacks like Blended.
| | No Defense | Blur+DDNM | ZIP (Ours) |
| ---------------------- | ----------------- | --------------------- | -------------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| BadNet | 84.99/94.53/14.98 | **84.56**/9.14/82.24 | 84.05/**7.55/83.97** |
| Blended | 86.14/99.85/10.19 | **85.37**/93.37/15.46 | 81.42/**8.35/78.36** |
***W3: Evaluation using a different diffusion model, or training on a different dataset?***
To address your concerns, we **use a different diffusion model [1] pre-trained on CIFAR datasets** to defend against BadNet/Blended/PhysicalBA attacks. Our results showcase its effectiveness. Additionally, we observe that the model pre-trained on ImageNet performs better than the model pre-trained on CIFAR, highlighting the importance of pre-training data quality and quantity. In our experiments, we select the ImageNet-pre-trained diffusion model as our backbone, which demonstrates excellent zero-shot purification performance on both in-distribution (Imagenette) and out-of-distribution (CIFAR-10, GTSRB) images, affirming our approach's effectiveness.
| | No Defense | ZIP (pre-trained on CIFAR) | ZIP (pre-trained on ImageNet) |
| --------------- | ----------------- | --------------------------------------------- | ------------------------------------------------ |
| CIFAR (32 × 32) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| BadNet | 82.31/99.98/10.00 | 70.35/12.56/66.04 | 78.97/5.53/79.10 |
| Blended | 80.26/99.96/10.03 | 70.96/16.39/49.26 | 72.62/7.75/57.98 |
| PhysicalBA | 85.3/98.73/11.2 | 80.30/10.05/78.03 | 80.10/4.33/80.33 |
[1] Denoising diffusion probabilistic models. NeurIPS, 2020.
***W4: More recent baselines should be evaluated***.
Thanks for this comment. We **included the contemporaneous work BDMAE [1], published online in March 2023, as a recent baseline in Section 4.4.1**. BDMAE is a black-box defense method that first identifies and masks trigger regions, then restores them using a masked autoencoder.
As demonstrated in Table 3, BDMAE can defend against patch-based attacks like BadNets (ASR=1.12), but it fails in non-patch attacks like Blended (ASR=99.88). In contrast, **our ZIP method can effectively defend against both attacks** (ASR values are low), making it a more practical defense.
It is also worth clarifying that black-box purification in the zero-shot setting is a very challenging task, and as a result, baseline methods are indeed rare.
[1] Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder. arXiv, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for providing responses to address my main concerns. After reading the rebuttal to mine and other reviewers, I would like to raise my score. | Summary: This paper proposes a novel backdoor defense framework called "zero-shot image purification" (ZIP) designed to protect against backdoor attacks on real-world black-box models. The proposed ZIP framework consists of two steps: a linear transformation applied to the poisoned image to remove the backdoor pattern and a pre-trained diffusion model used to recover missing semantic information. The reverse process generates high-fidelity purified images without requiring internal information about the poisoned model. The ZIP framework is evaluated on various datasets with different attack types and outperforms state-of-the-art backdoor defense methods. The results are expected to offer valuable insights for future defense methods for black-box models.
Strengths: 1. The proposed idea is somewhat novel since the authors leverage the diffusion model to recover the images. There have been similar works using auto-encoders [1] to reconstruct images to remove backdoor triggers, but the use of diffusion models has not been studied before.
2. The authors provide a theoretical analysis of how to model image purification with a diffusion model.
3. The proposed ZIP shows superior performance to existing defenses.
4. The paper is well-written and easy to follow.
[1]. Y. Liu, Y. Xie, and A. Srivastava, “Neural trojans,” in ICCD, 2017.
Weaknesses: 1. The idea of destroying backdoor trigger and reconstructing the images is very similar to pre-processing defenses. Both defenses aim to remove the backdoor trigger by reconstructing the images.
2. The threat model is not well defined in the paper. The capabilities of defenders and attackers are unclear.
3. The backdoor attacks evaluated in this paper were developed 4-5 years ago and are a bit too simple and outdated. Many advanced backdoor attacks such as hidden trigger backdoors [2], WaNet [3], LIRA [4], etc. have been proposed since then. It would be desirable to evaluate ZIP against more recent and advanced attacks.
[2]. A. Saha, A. Subramanya, and H. Pirsiavash, “Hidden trigger backdoor attacks,” in AAAI, 2020.
[3]. T. A. Nguyen and A. T. Tran, “Wanet-imperceptible warping-based backdoor attack,” in ICLR, 2021.
[4]. K. Doan, Y. Lao, W. Zhao, and P. Li, “Lira: Learnable, imperceptible and robust backdoor attacks,” in ICCV, 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses for details.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***W1: The idea of destroying backdoor trigger and reconstructing the images is very similar to pre-processing defenses. Both defenses aim to remove the backdoor trigger by reconstructing the images.***
The contribution of our work is different from existing pre-processing defenses [1,2,3,4,5] in the following aspects:
1. **Zero-shot Capability:** Unlike [1, 2, 3], which require clean images as training data, our method does not require any prior knowledge of clean/poisoned images.
2. **Black-Box Adaptability:** In contrast to [2, 4], which require model-internal information (e.g., gradients, weights), our method only requires the model output.
3. **Enhanced Semantic Information Preservation:** In comparison to [5], our method can better preserve semantic information as demonstrated in Table 1.
4. **Theoretical Justification:** We provide theoretical analyses that justify the efficacy of our RND-based purification approach.
Overall, although it shares similarities with pre-processing defenses, our ZIP framework can work in more challenging and realistic scenarios.
[1]. Y. Liu, Y. Xie, and A. Srivastava, “Neural trojans,” ICCD, 2017.
[2] H. Qiu, Y. Zeng, S. Guo, T. Zhang, M. Qiu, and B. Thuraisingham, "Deepsweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation." ASIA CCS, 2021.
[3] S. Udeshi, S. Peng, G. Woo, L. Loh, L. Rawshan, and S. Chattopadhyay, "Model agnostic defense against backdoor attacks in machine learning." IEEE Transactions on Reliability, 2022.
[4] B. G. Doan, E. Abbasnejad, and D. C. Ranasinghe, "Februus: Input purification defense against trojan attacks on deep neural network systems." ACSAC, 2020.
[5] Y. Li, T. Zhai, Y. Jiang, Z. Li, and S.-T. Xia, "Backdoor attack in the physical world." ICLR workshop, 2021.
***W2: The threat model is not well defined in the paper. The capabilities of defenders and attackers are unclear.***
Thanks for raising this question. We focus on backdoor defense in the black-box setting, where the **defender** has access only to the poisoned model's output, without access to the model's internal parameters or training datasets. On the other hand, the **attacker** can access and modify the model's internal components, training datasets, or any other information necessary to implement the attack. We will add a paragraph introducing the threat model in the next version.
***W3: The backdoor attacks evaluated in this paper are developed 4-5 years ago, which is a bit too simple and outdated. Many advanced backdoor attacks such as hidden trigger backdoors [2], WaNet [3], LIRA [4], etc. are proposed after that. It would be desirable to evaluate the ZIP against more recent and advanced attacks.***
For more advanced attacks, we conduct experiments with the **WaNet [1], Blind [2], and Label-Consistent attacks [3]** on the Imagenette dataset to further demonstrate our defense effectiveness. The quantitative results are listed below, while the qualitative results are provided in the rebuttal PDF. The results show that our ZIP can successfully defend against advanced attacks.
| | No Defense | ShrinkPad (defense) | Blur (defense) | ZIP (Ours) |
| ---------------------- | ------------------ | ------------------- | ----------------- | -------------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| WaNet | 85.45 /97.93/11.84 | 73.63 /26.87/60.35 | 62.98/17.88/56.81 | **79.13/7.77/76.78** |
| Blind | 79.15/99.64/10.42 | 65.22/6.34/65.60 | 75.26/28.12/67.54 | **78.80/5.32/79.03** |
| LabelConsistent | 77.57/73.17/36.56 | 66.14/0.25/64.53 | 50.36/0.28/50.14 | **73.63/0.12/74.26** |
[1] Nguyen, Anh, and Anh Tran. "Wanet--imperceptible warping-based backdoor attack." ICLR, 2021.
[2] Bagdasaryan, Eugene, and Vitaly Shmatikov. "Blind backdoors in deep learning models." USENIX Security, 2021.
[3] Turner, Alexander, Dimitris Tsipras, and Aleksander Madry. "Label-consistent backdoor attacks." arXiv 2019.
---
Rebuttal Comment 1.1:
Comment: I've read the authors rebuttal. Most of my concerns are adequately addressed with clarification and additional experimental results. Thus, I'd like to change my rating to weak accept. | Rebuttal 1:
Rebuttal: # Global Response to All Reviewers
Thank you for your time and efforts in reviewing our work. We greatly appreciate reviewers’ recognition of the quality and novelty of our research. Here is a summary of our response.
**Related Work:**
Our paper addresses the challenge of achieving practical **zero-shot purification defense in a black-box setting.** Our ZIP method introduces novelty compared to existing approaches in the following ways:
1. Compared to **pre-processing defenses**, our ZIP does not require access to poisoned model internals or prior knowledge of clean/poisoned images, while still maintaining better semantic information after purification.
2. Compared to **diffusion-based purification models**, ZIP enhances trigger destruction with linear transformation and incorporates RND theory to improve our reverse diffusion process.
3. Compared to **black-box detection methods**, our ZIP can remove backdoor effects from poisoned images through purification, and provides a theoretical analysis of its effectiveness.
In summary, our model presents a novel and robust defense against various attacks in more complex scenarios.
**Experiments:**
We have added the following experiments:
1. Defense Performance against **WaNet, Blind, and Label-Consistent** attacks: In Rebuttal_Table 1, ZIP shows effective defense performance against all three attacks, which showcases that our ZIP can defend against recent advanced attacks.
2. Performance comparisons with **image purification methods** DiffPure and Blur+DiffPure: In Rebuttal_Table 2, ZIP shows superior defense performance compared to DiffPure and Blur+DiffPure, which validates the proposed enhancements.
3. Defense Performance with **different pre-trained diffusion models**: In Rebuttal_Table 3, ZIP shows effective defense using a diffusion model pre-trained on CIFAR, demonstrating that ZIP can work with different diffusion models.
4. Defense Performance with **classifiers in different architectures**: In Rebuttal_Table 2 and 4, ZIP shows its effective defense performance with poisoned classifiers in ResNet-34 and VGG-16 networks.
5. Performance comparisons with the **image restoration method** DDNM: In Rebuttal_Table 2, ZIP shows better defense performance compared to DDNM, validating our superiority.
We also would like to mention that:
6. We include the **contemporaneous work BDMAE** as a recent defense baseline in Section 4.4.1; experiments show that our ZIP method can effectively defend against both the BadNet and Blended attacks, while BDMAE fails on the latter.
7. Qualitative performance comparisons with **DDNM** are provided in Supplementary Material H, where ZIP shows better performance in removing trigger patterns.
8. Extensive **visualization results of ZIP** purification are provided in Supplementary B to validate our proposed design.
In summary, the experiments show that **our ZIP can defend against advanced backdoors in diverse forms,** and ZIP shows better defense performance **compared to various baselines** including BDMAE, DiffPure, and DDNM.
We really appreciate your constructive comments, as they help us significantly improve the quality of our paper. We sincerely hope our response can address your concerns.
Pdf: /pdf/6571884d6685c1e67c0e0c91c31d7c696fdb6492.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The work deals with the challenge of defending against backdoor attacks for black-box models in zero-shot settings using pre-trained diffusion models. Based on some theoretical insights, the proposed method modifies the reverse process of the diffusion model so the recovered images can be more high-fidelity and clean. Experimental results show that the proposed framework is effective in defending against backdoor attacks without prior knowledge of the model or additional retraining.
Strengths: - The model-agnostic nature of the proposed method enhances its practicality as it can be adopted by any model without retraining.
- The theoretical analysis is clear and comprehensive.
- The additional techniques for speeding up the algorithm make the framework more realistic and applicable.
Weaknesses: - Previous works have shown that Diffusion models can be used for image purification [1, 2, 3]. Thus, the paper’s novelty primarily lies in the adaptation of the traditional diffusion model for high-fidelity image recovery. While these improvements are theoretically driven, empirical evidence demonstrating the effectiveness of these enhancements or validating the proposed theories is lacking.
- The experiments adopt three backdoor attacks: BadNet, PhysicalBA, and Blended. But more advanced attacks [4, 5] have been proposed and adopted in recent backdoor defense literature [6].
- There is an inconsistency in the statement on L344-345: “However, by switching to a different transformation, such as solely Blur or Grayscale, the attacks can be effectively mitigated.” In contrast, Table 4 reveals subpar defense performance when solely employing Grayscale.
[1] Nie et al. Diffusion Models for Adversarial Purification.
[2] Wang et al. Guided Diffusion Model for Adversarial Purification.
[3] Sun et al. PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition.
[4] Bagdasaryan and Shmatikov. Blind Backdoors in Deep Learning Models.
[5] Nguyen and Tran. WaNet -- Imperceptible Warping-based Backdoor Attack.
[6] Doan et al. Defending Backdoor Attacks on Vision Transformer via Patch Processing.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Regarding the setting of 4.4.2, if the attacker assumes that the defender is using Blur for transformation, will switching to a different transformation, such as Grayscale, be effective?
- The statement "even when using an attacked model as the classifier" on L66 is unclear. Could you provide more context or clarification for this scenario?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations of their work. Please see the Weaknesses stated above to see other limitations I find.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***W1: Empirical evidence supporting the proposed enhancements to [1,2,3] and validating theories is currently lacking.***
Thanks for the thoughtful question. We would like to clarify the novelty and contribution of our work as follows.
1. **An effective black-box defense against backdoor attacks:** Existing methods [1, 2, 3] rely on the denoising capability of diffusion models (e.g., DDPM), which is limited against backdoor patterns (e.g., patch-based patterns) as demonstrated in [4]. In contrast, our method focuses on defending against backdoor attacks, especially under **the novel and challenging black-box scenario**.
2. **A novel reverse diffusion process**: Previous methods [1, 3] employ naive diffusion models that could produce unpredictable purified images, while we innovatively integrate RND theory into the reverse diffusion process, ensuring high-fidelity recovery of images and eliminating backdoor patterns. To empirically demonstrate the advantages, we implement DiffPure [1] and its combination with Blur transformation (Blur+DiffPure) as baselines. As shown in the table below, empirical results show ZIP performs better than DiffPure and Blur+DiffPure.
3. **Linear transformation for backdoor destruction:** Linear transformation is a basic step in the RND theory. We explore the effectiveness of linear transformations, such as Blur and Grayscale, in destroying backdoor patterns. Empirical results of linear transformation effectiveness are provided in Sec 4.4.1.
| | No Defense | DiffPure | Blur+DiffPure | ZIP (Ours) |
| ---------------------- | ----------------- | ----------------- | ----------------- | -------------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| BadNet | 84.99/94.53/14.98 | 78.85/91.41/18.11 | 75.13/10.16/74.31 | **84.05/7.55/83.97** |
| Blended | 86.14/99.85/10.19 | 80.63/43.82/56.68 | 75.89/13.53/73.98 | **81.42/8.35/78.36** |
[1] Diffusion models for adversarial purification. ICML, 2022.
[2] Guided diffusion model for adversarial purification. arXiv, 2022.
[3] Pointdp: Diffusion-driven purification against adversarial attacks on 3d point cloud recognition. arXiv, 2022.
[4] DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World. arXiv, 2023.
***W2: Adopt more recent attacks like Blind attack and WaNet attack.***
For more recent attacks, we have conducted experiments with Blind [1], WaNet [2], and Label-Consistent [3] on the Imagenette dataset to further demonstrate our defense effectiveness. The quantitative results are listed below, and the qualitative results are provided in the rebuttal PDF. The results show that our ZIP can successfully defend against various advanced attacks.
| | No Defense | ShrinkPad (defense) | Blur (defense) | ZIP (Ours) |
| ---------------------- | ----------------- | ------------------- | ----------------- | -------------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| WaNet | 85.45/97.93/11.84 | 73.63/26.87/60.35 | 62.98/17.88/56.81 | **79.13/7.77/76.78** |
| Blind | 79.15/99.64/10.42 | 65.22/6.34/65.60 | 75.26/28.12/67.54 | **78.80/5.32/79.03** |
| LabelConsistent | 77.57/73.17/36.56 | 66.14/0.25/64.53 | 50.36/0.28/50.14 | **73.63/0.12/74.26** |
[1] Blind backdoors in deep learning models. USENIX Security, 2021.
[2] Wanet--imperceptible warping-based backdoor attack. ICLR, 2021.
[3] Label-consistent backdoor attacks. arXiv 2019.
***W3: Inconsistency noted in L344-345***
In Table 4, the effectiveness of Grayscale is shown in its acceptable performance against enhanced Blended attacks. However, we agree that GrayScale shows subpar performance in some cases. To avoid further confusion, we will revise the expression in the next version. Thank you for pointing this out.
Moreover, we would like to emphasize that Table 4 reports results under **enhanced attacks**, where purified poisoned images are selected as new attack images to inject backdoors. Existing backdoor defense methods [1] fail to defend against such enhanced attacks. In comparison, our proposed ZIP can defend against the enhanced attack if a proper transformation is applied.
[1] Rethinking the Trigger of Backdoor Attack. arXiv, 2021
***Q1: Regarding the setting of 4.4.2, if the attacker assumes that the defender is using Blur for transformation, will switching to a different transformation, such as Grayscale, be effective?***
Yes, the defense is expected to be effective when the attacker uses grayscale to enhance their attack while the defender uses Blur. As shown in Table 4, the defense is effective when the attacker enhances their attack with different transformations as the defender. In practical scenarios, the defender's transformation can vary and be hidden from potential attackers. This characteristic enhances the robustness of our defense.
***Q2: The statement "even when using an attacked model as the classifier" on L66 is unclear. Could you provide more context or clarification for this scenario?***
The “attacked model” is a classifier already poisoned by backdoor attacks (e.g., BadNets). We use the attacked model as the downstream classifier following the typical settings in previous work on purification-based defense [1, 2, 3]. We will revise the expression in the next version to avoid confusion.
[1] Backdoor attack in the physical world. ICLR workshop, 2021.
[2] Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. ACSAC, 2020.
[3] Deepsweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation. ASIA CCS, 2021.
---
Rebuttal Comment 1.1:
Title: Follow-up questions
Comment: Thank you for your effort to respond to my comments.
> To empirically demonstrate the advantages, we implement DiffPure [1] and its combination with Blur transformation (Blur+DiffPure) as baselines.
Can you please describe in detail the difference between Blur+DiffPure and your ZIP method in practice?
> Yes, the defense is expected to be effective when the attacker uses grayscale to enhance their attack while the defender uses Blur.
Can you show the quantitative result of the experiment?
---
Reply to Comment 1.1.1:
Title: Response to the follow-up questions
Comment: **Q1: Can you please describe in detail the difference between Blur+DiffPure and your ZIP method in practice?**
Our ZIP can recover the semantic information removed by the Blur, while Blur+DiffPure cannot. In detail:
1. **Blur+DiffPure:** DiffPure aims to remove attack patterns by first diffusing images with noise and then recovering images through an **unconditional reverse process**. While coupling DiffPure with Blur enhances its trigger removal capability (lower ASR), its unconditional generative process fails to effectively recover the semantic information removed by Blur. This leads to a drop in clean accuracy (CA). In the above table, Blur+DiffPure exhibits poorer CA compared to both DiffPure and our ZIP.
2. **ZIP:** In contrast, our ZIP, after Blur, utilizes a **conditional reverse process** to recover semantic information deleted by Blur through RND theory. Specifically, using RND, the purified image $\hat{\mathbf{x_t}}$ at time $t$ is approximated as follows:
$$\hat{\mathbf{x}}_t = \sqrt{\bar{\alpha}_t}\mathbf{A}^{\dagger}\mathbf{x}^A + (\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A})\mathbf{x}_t + \mathbf{A}^{\dagger}\mathbf{A}\sqrt{1-\bar{\alpha}_t}\boldsymbol{\epsilon}_t,$$
where the blurred image $\mathbf{x}^A$ and the diffusion outputs $\mathbf{x}_t, \boldsymbol{\epsilon}_t$ collaboratively generate high-fidelity purified images. We proved in Corollary 3.2.1 that our proposed ZIP preserves the semantic information, and our experiments also demonstrate this empirically (best CA with ZIP).
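The range-null-space split in the formula above can be checked numerically. In the toy sketch below, a small 1-D averaging operator stands in for the Blur transformation (sizes, the operator, and all names are our own assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear degradation A: averages adjacent pixels (full row rank).
n, m = 16, 8
A = np.zeros((m, n))
for i in range(m):
    A[i, 2 * i: 2 * i + 2] = 0.5
A_pinv = np.linalg.pinv(A)          # Moore-Penrose pseudo-inverse A^dagger
P = A_pinv @ A                      # projector onto range(A^T)

def rnd_step(x_A, x_t, eps_t, alpha_bar_t):
    """x_hat_t = sqrt(abar_t) A^+ x^A + (I - A^+A) x_t
                 + A^+A sqrt(1 - abar_t) eps_t.
    The blurred observation fills the range space; the diffusion
    sample supplies the null-space (lost semantic) content."""
    return (np.sqrt(alpha_bar_t) * (A_pinv @ x_A)
            + (np.eye(n) - P) @ x_t
            + P @ (np.sqrt(1.0 - alpha_bar_t) * eps_t))

x_A = A @ rng.normal(size=n)        # blurred, trigger-destroyed observation
x_t = rng.normal(size=n)            # current diffusion iterate
eps_t = rng.normal(size=n)          # predicted noise at step t
alpha_bar_t = 0.7
x_hat = rnd_step(x_A, x_t, eps_t, alpha_bar_t)

# Consistency: degrading x_hat reproduces the scaled, noised observation.
assert np.allclose(A @ x_hat,
                   np.sqrt(alpha_bar_t) * x_A
                   + np.sqrt(1 - alpha_bar_t) * (A @ eps_t))
```

The assertion holds because, for a full-row-rank $\mathbf{A}$, $\mathbf{A}\mathbf{A}^{\dagger} = \mathbf{I}$ and $\mathbf{A}(\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A}) = \mathbf{0}$, which is exactly the sense in which the purified image stays faithful to the (transformed) input while the diffusion model fills in what the transformation destroyed.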
**Q2: Can you show the quantitative result of the experiment?**
Yes, the experiments where the attacker uses grayscale to enhance their attack while the defender uses Blur as defense transformation are below.
We can observe that when the defender uses the same transformation (Grayscale) as the attack, the defense against enhanced attack tends to fail. However, switching to a different transformation like Blur enables the defense to succeed.
| Enhanced Attack (using Grayscale during attack) | Original Trigger | ZIP (Blur+Grayscale) | ZIP (Blur) | ZIP (Grayscale) |
| ---------------------- | ----------------- | -------------------- | ----------------- | ----------------- |
| Imagenette (256 × 256) | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ | CA ↑/ ASR ↓/ PA ↑ |
| BadNets | 86.64/87.36/21.98 | 80.38/19.28/72.45 | **82.44/7.76/83.38** | 70.77/69.36/34.21 |
| Blended | 84.89/99.59/10.44 | **81.93**/39.81/59.43 | 81.50/**11.29/77.12** | 77.01/98.98/10.82 |
Thanks again for your effort in helping us improve our work. And we kindly ask for your reconsideration of the score in light of our clarifications. | null | null | null | null | null | null |
Toward Understanding Generative Data Augmentation | Accept (poster) | Summary: The paper presents a theoretical study of the generalization properties of training when the training set is augmented with artificially generated data. The main result of the paper is a theorem which bounds the generalization error by two terms – one representing the divergence between the original training distribution and the augmented distribution, and one representing the generalization error of the mixed distribution. The authors present two empirical contributions. First, they study a Gaussian mixture model with synthetic data and find that their theoretical predictions match the measurements. Second, they consider ResNets trained on CIFAR and find that generative models are useful when augmentations are not used, and that diffusion models, but not GANs, are useful with augmentations.
EDIT: increased confidence from 2 to 3 after clarifications from the authors.
Strengths:
* The topic is topical and potentially impactful.
* The writing is very good.
* The results of the theorem are natural and intuitive.
Weaknesses: * It seems like a major assumption is that “the distribution learned by the generative model is dependent on the sampled train set”. In practice, I don’t think this is necessarily true. If this assumption is removed, the theoretical results might be much easier to derive.
* Gaussian mixture models are not really used in practice, so it’s not a very interesting experiment.
* It doesn’t seem like the theoretical results will apply to deep neural networks.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Can you comment on the assumption that “the distribution learned by the generative model is dependent on the sampled train set”. Is it realistic? What happens if it is removed?
2. How do the experiments on CIFAR-10 relate to your theoretical results?
3. How well do your theoretical results apply to deep neural networks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer sAdG
We thank Reviewer sAdG for the positive score and valuable comments.
## Weakness 1 & Q1: Assumption "distribution learned by the generative model is dependent on the sampled train set”
Thanks for the suggestion. **This is not an assumption but a main challenge we have to face to establish a theory that conforms to reality**. This statement holds naturally because even when we change one data point of the input, a learning algorithm may output a very different result. The reviewer can refer to the algorithmic stability theory we summarized in the related work.
## Weakness 2: bGMM experiment is not interesting
Thanks for the advice. **The bGMM experiments are conducted to clearly illustrate our theoretical results (Theorem 3.2) and verify our framework (Theorem 3.1)**, rather than to approximate the real-world problem. As a supplement, we design experiments on the CIFAR-10 dataset to show the implications of our theory in practice.
## Q2: How do the experiments on CIFAR-10 relate to the theoretical results
Thanks for the suggestion. The experiments conducted on CIFAR-10 are used to verify Theorem 3.3.
**The theoretical implications for deep neural networks (Theorem 3.3) can be summarized as follows.**
* When $m_S$ is large enough, it is hard to boost the performance by augmenting the train set based on GANs. GDA may even damage the generalization. (Remark on line 276)
* When $m_S$ is small and awful overfitting happens, GANs can improve the test performance. (Remark on line 280)
**To verify our results (Theorem 3.3)**, we designed the experiments on the CIFAR-10 dataset as follows.
* When $m_S$ is approximately large (with standard augmentation), we choose a "good" GAN (StyleGANv2) to verify that it is hard to use GANs to improve the test performance.
* When $m_S$ is small and awful overfitting happens (without standard augmentation), we choose a "bad" GAN (DCGAN) to verify that GANs can improve the test performance.
The analysis of the experimental results can be found in the next section. We will clarify these more clearly in the final version.
## Weakness 3 & Q3: How well do the theoretical results apply to deep neural networks
Thanks for the suggestion. Theorem 3.1 establishes a general framework to understand GDA, and Theorem 3.3 particularizes it to deep neural networks (GANs). **The experimental results (please see Table 2) support our theory on deep neural networks (Theorem 3.3)**, as acknowledged by Reviewers SG4V and jB6r. The experimental results can be divided into two parts as follows.
* GANs improve the test performance of classifiers when $m_S$ is small (without standard augmentation) and awful overfitting occurs, even though DCGAN cannot generate high-quality images. This supports the Remark on line 280.
* GANs cannot noticeably boost performance, and may even damage it, when $m_S$ is approximately large (with standard augmentation). GDA with DCGAN always hurts the generalization ability. Even though we use StyleGAN2-ADA (a state-of-the-art GAN), we cannot noticeably boost classifier performance, and we even consistently obtain worse test accuracy when $m_G$ is 500k or 1M. This supports the Remark on line 276.
Therefore, our experiments support that the theoretical results apply well to deep neural networks. We will discuss these in detail in the final version.
---
Rebuttal Comment 1.1:
Title: Thanks for your reply
Comment: Thanks for your reply. I'm happy for the clarifications regarding how the theory applies to ANNs. I will keep my score but will increase my confidence.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your valuable comments and acknowledgment of our work. | Summary: The paper provides a theoretical analysis of the stability bound for generative data augmentation. The authors provide empirical evidence to validate the proposed theory on bGMM and GANs.
Strengths: Data augmentation plays an important role in deep learning. The paper provides a theoretical analysis of using generative models for data augmentation. Understanding generative data augmentation can potentially benefit machine learning tasks in low-data conditions.
Weaknesses: - In Section 4.2, the authors use standard augmentation to approximate CIFAR-10 with larger $m_S$. The comparison of GDA on CIFAR-10 and augmented CIFAR-10 is unfair as the standard augmentations induce effective inductive bias in the dataset, e.g., flipping. A more proper way to approximate CIFAR-10 with different $m_S$ is to sample multiple subsets of CIFAR-10 data with different sizes.
- Although the paper proposes a general stability bound on GDA, it is unclear how the stability bound can be utilized to advance existing baselines. The finding that GDA can improve test generalizations on small datasets, where awful overfitting occurs, is somewhat expected. It is still unclear how to set the hyperparameters, like the number of augmented samples given any new dataset.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How to use the stability bound to advance existing generative data augmentation methods in practice?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I find no negative societal impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer hFUC
We thank Reviewer hFUC for the valuable comments.
## Weakness 1: Standard augmentation
Thanks for the helpful advice. **There is no comparison between CIFAR-10 and augmented CIFAR-10; the standard augmentation is used to approximately verify our theory with large $m_S$**. Our experimental results support the theoretical results, as acknowledged by Reviewers SG4V and jB6r.
* "Unfair comparison between CIFAR-10 and augmented CIFAR-10". **We note there is no comparison between CIFAR-10 and augmented CIFAR-10**. They are two different cases in which we verify Theorem 3.3. First, when standard augmentation is not used, $m_S$ is small. In this case, we validate that GANs can improve the test performance (see Remark on line 280). Second, when standard augmentation is used, $m_S$ is approximately large. In this case, we validate that GANs cannot noticeably boost the test performance and may even damage it (see Remark on line 276).
* "Approximating CIFAR-10 with different $m_S$". When $m_S \leq 50,000$ (the size of the CIFAR-10 dataset), we agree that sampling subsets of the CIFAR-10 dataset is the correct method. However, in that case, we can only observe worse overfitting, **which fails to verify our theoretical result that GANs cannot improve the test performance when $m_S$ is large**. Therefore, we must investigate the setting where $m_S > 50,000$. However, when $m_S > 50,000$, **simple oversampling would repeat many data points**, so we chose the common augmentation for CIFAR-10 [51] to approximate the large $m_S$.
We will discuss these in detail in the final version.
## Weakness 2: Expected results on small dataset & Optimal augmentation size
Thanks for the suggestion. We discuss the "expected results" and the setting of hyperparameters respectively.
### "Expected results"
**Precisely analyzing expected phenomena in theory contributes to the community**. These expected results empirically validate the proposed theoretical framework. Please see the details in our response to common concern 2.
### Setting of hyperparameters
Our work serves as a theoretical foundation to be extended in the future, as acknowledged by Reviewers SG4V and jB6r. It gives insight into the optimal augmentation size $m_G^*$. Please see details in our response to common concern 1.
## Q1: How to use the stability bound to advance existing generative data augmentation methods in practice
Thanks for the suggestion. This paper is mainly a theoretical work and a first step towards understanding GDA, so it is hard to use it to give detailed practical guidance. However, our results can still offer some practical insights. Please see details in our response to common concern 3.
We will discuss the practical impact in more detail in the final version.
**We think we have fully addressed the questions from the reviewer. If the reviewer has any further questions, please feel free to contact us for further discussion**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The authors addressed my primary concerns and provided some insights into how the stability bound can be used in practice. Therefore, I am increasing my score from 4 to 5.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thanks very much for your valuable comments and the update on the rating. | Summary: This paper studies generative data augmentation (GDA), in which samples from trained generative models are added to the training dataset for training discriminative models. There have been several empirical research reports on GDA, and it is known that GDA is unlikely to be effective when real training data is abundant. However, no theoretical analysis has been provided so far to explain this phenomenon. This study assumes a realistic setting in which the distribution imitated by the generative model and the real dataset distribution are different, and analyzes the generalization error bounds of GDA in the non-i.i.d. setting in three cases: (i) a general case based on the existing algorithmic stability framework, (ii) a binary Gaussian mixture model (bGMM) and a linear classifier, and (iii) deep generative adversarial nets and a deep neural classifier. The main findings from these theorems are (a) the divergence between the distribution imitated by the generative model and the true distribution is important for the generalization error, (b) increasing the number of generated samples does not lead to a faster learning rate, and (c) GDA does not achieve a faster learning rate in situations affected by the curse of dimensionality. The paper provides simple experiments on bGMM and CIFAR-10 to test the theory, showing the similarity between the upper bound of the generalization error given by the theorems and the measured trends of the error, and the effectiveness of GDA in situations where the curse of dimensionality occurs. While this generalization error analysis is not perfect, as the paper states in the Limitations, this paper will have a significant impact on the research field of GDA, where no theoretical discussion existed.
Strengths: + The paper is well organized and clearly states its arguments. Also, the paper is very readable, with careful notation and explanation of existing frameworks to explain the theory.
+ The paper provides generalized error-bound analysis in three settings ranging from general to realistic settings, establishing a first step toward learning guarantees in GDA.
+ The paper confirms the implications of the theorem through several experiments.
Weaknesses: - **W1** Some parts of the explanation are difficult to interpret. In Theorem 3.2 and 3.3, the paper explains "constant-level improvement" by GDA, but it is not clear in which equation "constant" appears, making the argument difficult to understand. Further, in Eq. (2), most of the theorems depend on the explanation in the Appendix, and the paper is not self-contained in this respect.
- **W2** The paper does not mention several important previous studies. For example, Shmelkov et al. [a] were the first to report that GDA with GANs degrades accuracy even in small training dataset settings. Subsequently, Tran et al. [b] and Yamaguchi et al. [c] proposed methods to improve GDA based on the principles of Bayesian neural networks and multi-task learning. Although these works use somewhat older generative models, and there may be facts that partially contradict the claims of the paper, they are considered important milestones in explaining the motivation for your work. I recommend that the paper cite them appropriately.
**Reference**
- [a] Shmelkov, Konstantin, Cordelia Schmid, and Karteek Alahari. "How good is my GAN?." Proceedings of the European conference on computer vision (ECCV). 2018.
- [b] Tran, Toan, et al. "A bayesian data augmentation approach for learning deep models." Advances in neural information processing systems 30 (2017).
- [c] Yamaguchi, Shin'ya, Sekitoshi Kanai, and Takeharu Eda. "Effective data augmentation with multi-domain learning gans." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - **Q1** Can the theorems provided by the paper address GDA in the situation of transfer learning of generative models? We often apply transfer learning such as fine-tuning to compensate for the distribution approximation performance of the generative model in GDA. I understood that minimizing the distribution divergence by transfer learning does not matter because Theorem 3.1 makes no assumptions about training methods; is this correct?
- **Q2** Why is GAN set as the generative model in Theorem 3.3? Is there any advantage in proving this theorem?
- **Q3** This is mostly a comment, but I think that the experiments in Section 4.2 should be included in the main paper since the evaluation of GDA with diffusion models as well as GANs is a high-impact result.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: This paper adequately discusses the limitation of the tightness of the theoretical guarantee.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer jB6r
We thank Reviewer jB6r for the acknowledgment of our contributions and the insightful and constructive comments.
## Weakness 1: Constant-level improvement
Thanks for the nice advice. In general, given a fixed $m_G$, it is challenging to obtain an explicit form of the "constant-level improvement" due to the complexity of the generalization bound. However, **we can make the "constant-level improvement" precise by diving into the explicit bounds in the Appendix** (e.g., Eq. (19)). We denote the generalization error bound by ${Error}(m_G)$, where $m_G$ is the augmentation size. We compare the cases where $m_G = 0$ (without GDA) and $m_G \to +\infty$, respectively. Then the following holds.
* bGMM: when $d > m_S$, we have $Error(+\infty) \leq \frac{1}{\log(m_S)} Error(0)$.
* GANs: when $\sqrt{d} > m_S$, we have $Error(+\infty) \leq \frac{1}{AL^2} Error(0)$, where $A = \prod_{l} \Vert W_l \Vert$.
These results can be proved by plugging $m_G = 0$ and $m_G \to +\infty$ into the bounds in the proof of Theorem 3.2&3.3. **We will re-organize these theorems and add these results and discussion in the final version**. In addition, we visualized the explicit bound of bGMM setting in Figure 1 d&e&f, which empirically validates the "constant-level improvement".
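To illustrate numerically what "constant-level rather than order-level" means (using a purely hypothetical bound shape, not the paper's Eq. (19)), consider a bound whose constant shrinks as $m_G$ grows while the $1/\sqrt{m_S}$ rate in $m_S$ is unchanged:

```python
import math

def toy_bound(m_S, m_G, C=1.0, D=3.0):
    """Hypothetical bound: growing m_G shrinks the constant (C + D -> C),
    but the 1/sqrt(m_S) rate in m_S stays the same."""
    return (C + D * m_S / (m_S + m_G)) / math.sqrt(m_S)

no_gda = toy_bound(100, 0)            # (C + D) / sqrt(m_S) = 0.4
infinite_gda = toy_bound(100, 10**9)  # approaches C / sqrt(m_S) = 0.1
ratio = infinite_gda / no_gda         # constant factor C / (C + D) = 0.25
```

Here the improvement from $m_G \to +\infty$ is a fixed multiplicative constant, while doubling $m_S$ still improves the bound at the same $1/\sqrt{m_S}$ order with or without GDA.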
## Weakness 2: Need to cite some previous works
Thanks for the helpful suggestion. We will cite the mentioned papers in the final version.
## Q1: Transfer learning of generative models
Thanks for the insightful comment. Yes, if we are interested in analyzing the case where some transfer learning techniques are used to improve the distribution approximation performance of the generative model (decreasing $d_{TV}(D, D_G(S))$), our results (Theorem 3.1) can be a theoretical foundation. By decreasing $d_{TV}(D, D_G(S))$, our results show that GDA can perform better when a suitable augmentation number is chosen. We will add more discussion to the conclusion in the final version.
## Q2: Why does theorem 3.3 choose GAN
Thanks for the comment. In this paper, we use the existing results (Lemma B.12) of GANs [46] to bound the $d_{TV}(D, D_G(S))$ in Theorem 3.3. With the emergence of more advanced results of generative models, our results can be improved and extended further.
## Q3: Some important results in Appendix
Thanks. We will move Table 2 to the main paper in the final version following your suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the rebuttal. The authors adequately addressed my concerns and promise to revise the paper according to the response. So, my confidence level regarding my review of this paper has increased. Thank you.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your valuable comments and acknowledgment of our work. | Summary: Generative data augmentation (GDA) aims to improve model performance by generating artificial labeled samples to enlarge a limited training dataset, but it is also highly influenced by the size of the training dataset, the choice of augmentation method, and the number of augmented data points. The paper seeks to develop a theoretical understanding of GDA by extending classical results built upon i.i.d. assumptions to the generatively augmented dataset, and establishes a general relationship between the stability bound and the learned distribution divergence, in addition to the augmentation size. To further investigate and verify the proposed upper bound on the generalization error, the paper also particularizes the specific cases of bGMM and GAN, and provides insights to understand the theoretical results. Finally, empirical experiments on a synthetic dataset and CIFAR-10 are conducted to study the effect of different factors and validate the discovered theoretical findings.
Strengths: 1) Mathematical notations and theoretical assumptions are clearly exhibited and explained in the paper. Details required to understand the problem background and existing results are also provided.
2) The theoretical results are novel and the related analysis seems to be solid and intuitive. The proposed theorems also build connection to previous results under i.i.d. assumption and empirical findings.
3) The idea of extending classical generalization error theories to the generative data augmentation problem is novel and well-motivated.
Weaknesses: 1) Although the theoretical results and corresponding conclusions of this paper are easy to understand and conform to common intuition, the many scattered remarks make the paper disorganized, forcing readers to jump around while reading. The paper should be re-organized to make the content easier to access.
2) Some figures (e.g. Figure 1a) are hard to recognize, and some are unnecessarily separated into parallel parts (e.g. Figure 1b and Figure 1c).
3) My greatest concern about this work is that when talking about data augmentation, we are most interested in the difference in model performance between training with and without data augmentation. However, the paper only considers the case where the optimal augmentation number $m_G^*$ is adopted.
4) According to Theorem 3.1, the generalization bound for GDA is composed of the distribution divergence and the generalization error with respect to the mixed distribution, where the former is determined by the model itself and the latter is controlled by the number of generated data points. For a given model with fixed generative ability, all we can do is find a near-optimal number of generated samples. However, the paper's guidance on choosing this number is not inspiring, in my opinion.
5) Experiments on real-world datasets seem incomplete and not convincing enough. First, only a single dataset (CIFAR-10) is adopted, which cannot rule out effects specific to the nature of that dataset. Second, the experimental results show limited information and connection to the proposed theories, since the generalization error is inestimable in this case. Moreover, it is unfair to compare generative models with totally different architectures; a fair comparison could instead be made between the same generative model at different training degrees.
6) The practical contribution of the paper may be limited since many conclusions are intuitive and offer little help for practical application.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) In the paper the mixed distribution with augmentation is simply defined as a weighted combination of the training set distribution and the generative model distribution, is it a proper definition particularly when these two distributions are mutually dependent?
2) Could the authors clarify the purpose of choosing DCGAN, StyleGAN and EDM for experiments since there are plenty of alternative choices such as classical variational autoencoder (VAE)? This part is unclear for me from the paper.
3) The theoretical results and conclusions given by the paper are solid. But could the authors illuminate what can be inspired from these conclusions especially for utilizing generative data augmentation in practical applications?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer Dxq5
We thank Reviewer Dxq5 for the valuable comments.
## Weakness 1: Organization
Thanks for the advice. We will re-organize the separated remarks in a coherent and integrated manner.
## Weakness 2: Figures
We will make the figures more recognizable by removing some unimportant lines. Besides, we will integrate Fig. 1b and Fig. 1c (similarly, Fig. 1e and Fig. 1f) into a double y-axis graph.
## Weakness 3: Difference between with and without GDA & Only considers the case with $m_G^*$
Thanks for the suggestion. We discuss two concerns respectively.
### Difference between with and without GDA
This is one of the main questions we want to answer in this paper. It can be divided into two cases, one with large $m_S$ and the other with small $m_S$, detailed as follows.
* Large $m_S$. In this case, $m_S$ dominates the generalization bound. In Corollary 3.1, we conclude that if $d_{TV}(D,D_G(S))=o (\max(\log(m)\beta_m,1/\sqrt{m}))$, then using GDA enjoys a faster learning rate than not using GDA. Besides, in both the bGMM and GANs settings (Theorem 3.2&3.3), we prove the precondition fails to hold, so GDA is ineffective when $m_S$ is large enough.
* Small $m_S$. In this case, some terms (e.g. dimension) dominate the generalization bound, and awful overfitting happens. In the bGMM and GANs settings (Theorem 3.2&3.3), we prove that GDA can bring a constant-level improvement to the generalization performance.
### Only considers the case with $m_G^*$
**In fact, we do not only consider the case with $m_G^\*$**. The choice of $m_G$ can be divided into 4 cases:
* $m_G=0$. It means that the GDA is not used, which is the baseline we want to compare.
* $m_G=\Theta(m_S)$. It is the common case when people use the GDA.
* $m_G=m_{G,order}^*$. It is the efficient augmentation size that achieves the fastest learning rate w.r.t. $m_S$.
* $m_G=m_{G}^*$. It is the optimal augmentation size that minimizes the generalization bound in Theorem 3.1.
**All the main theorems** in the paper include (Theorem 3.1) or highlight (Theorem 3.2&3.3) the first three settings. Besides, **all experiments** also mainly focus on the cases where $m_G=0$ or $m_G=\Theta(m_S)$ ($m_G\leq50m_S$ for the bGMM and $m_G\leq20m_S$ for GANs).
We will discuss these in more detail in the final version.
## Weakness 4: Optimal augmentation size
Thanks for the suggestion. Our work serves as a theoretical foundation and gives insights into the optimal augmentation size. Please see details in our response to common concern 1.
## Weakness 5: Experimental design
Thanks for the advice. **We adopt the CIFAR-10 dataset to empirically verify our theory; the generalization error can be estimated, and no comparison is made between different generative models**. Our experimental results support the theoretical results, as acknowledged by Reviewers SG4V and jB6r.
* "Only a single CIFAR-10 dataset is adopted". CIFAR-10 is a widely used dataset and **we adopt it to empirically validate Theorem 3.3**. Combining the simulations in the bGMM setting, our theory is verified sufficiently.
* "The generalization error is inestimable in the CIFAR-10 dataset". By definition, given a trained neural classifier, the generalization error of Theorem 3.3 can be estimated by the absolute gap between the mean cross-entropy loss on the training set (with generated data) and the mean cross-entropy loss on the test set. **We add the results of GANs with this estimator in the latest uploaded PDF (Table A)**. On the one hand, GANs decrease the generalization error when $m_S$ is small (without standard augmentation). On the other hand, GANs fail to noticeably boost performance, and even increase the error, when $m_S$ is approximately large (with standard augmentation). **The results support Theorem 3.3 again**.
* "Unfair to take generative models with totally different architectures for comparison". **The experiments are conducted to verify our Theorem 3.3, rather than comparing different generative models**. How these generative models verify our theory can be found in our response to Q2. **To reduce the confusion here, we will split Table 2 into three tables according to the generative models**.
We will clarify these more clearly in the final version.
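For concreteness, the estimator described above (the absolute gap between the mean cross-entropy loss on the augmented training set and on the test set) could be sketched as follows; the tiny two-class predictor and data points are purely hypothetical:

```python
import numpy as np

def cross_entropy(p, y):
    """Cross-entropy of predicted class probabilities p for true label y."""
    return -np.log(p[y])

def generalization_gap(predict, train_set, test_set):
    """Estimated generalization error: |mean train CE - mean test CE|."""
    train_ce = np.mean([cross_entropy(predict(x), y) for x, y in train_set])
    test_ce = np.mean([cross_entropy(predict(x), y) for x, y in test_set])
    return abs(train_ce - test_ce)

def predict(x):
    # fixed toy predictor over two classes (hypothetical, stands in for a trained classifier)
    return np.array([0.8, 0.2]) if x > 0 else np.array([0.3, 0.7])

train = [(1.0, 0), (-1.0, 1), (2.0, 0)]  # (input, label) pairs, incl. generated data
test = [(0.5, 0), (-0.5, 0)]
gap = generalization_gap(predict, train, test)
```

In the real setting, `predict` would be the trained neural classifier, and `train` would contain the real-plus-generated training examples.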
## Weakness 6: Intuitive results show little help to practice
Thanks for the advice. Analyzing the intuitive phenomena contributes to the community. Please see details in our response to the common concern 2.
## Q1: Definition of the mixed distribution
Yes, it is a proper definition regardless of whether $D_G(S)$ depends on $D$. In fact, a convex combination of arbitrary distributions is still a distribution. Please feel free to ask further questions if anything remains unclear.
## Q2: The purpose of choosing DCGAN, StyleGAN, and EDM
Thanks for the suggestion. **GANs are chosen to empirically validate Theorem 3.3, and the EDM is chosen to explore the ability of the diffusion model.** First, we choose a "bad" GAN (DCGAN) to empirically verify that GANs can improve the test performance when awful overfitting happens (without standard augmentation). Second, we choose a "good" GAN (StyleGANv2) to verify that GANs cannot noticeably improve the test performance when $m_S$ is approximately large (with standard augmentation). Third, because diffusion models have achieved great success in recent years, we conduct experiments on the EDM and suggest that diffusion models achieve a better $d_{TV}(D, D_G(S))$ than GANs. We will discuss these in more detail in the final version.
## Q3: Impact on the practice
This paper is a first step towards understanding GDA, so it is difficult to use it to give detailed practical guidance. However, our results can still offer some practical insights. Please see details in our response to common concern 3.
**We think we have fully addressed the concerns of the reviewer. If the reviewer has any further questions, please feel free to contact us for a further discussion**.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the detailed rebuttal. The authors' responses address most of my concerns, hence I raised my rating to borderline acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks very much for your valuable comments and the update on the rating! | Rebuttal 1:
Rebuttal: # Summary of the revision
We sincerely thank the reviewers for their valuable comments, which help to further improve the quality of our work. We have thoroughly addressed the detailed comments and summarize the revision in the next version as follows:
## New results
* **We add the results of GANs with the estimated generalization error in the latest uploaded PDF (Table A)**.
* We add more theoretical results to clarify the constant-level improvement in Theorem 3.2&3.3.
## Writing
* We avoid the overloading of $d$ by using $\mathcal{D}$ to denote the divergence.
* We clarify the meaning of "augmentation consumption" on line 166.
* We replace the "learning rate" on line 258 with "step size" to avoid ambiguity.
* We make the figures more recognizable by removing some unimportant lines. Besides, we integrate Fig. 1b and Fig. 1c (similarly, Fig. 1e and Fig. 1f) into a double y-axis graph.
* We move Table 2 to the main paper.
## Discussion
* We will cite the papers mentioned by Reviewer jB6r.
* We add more discussion about the scope of the theoretical results.
* We add more discussion about the design of our experiments on the CIFAR-10 dataset and its relation with Theorem 3.3.
* We add more discussion about the impact of work on the practice.
* We discuss the tightness of our bounds more detailedly in the Limitations section.
# Common concerns from reviewers
We thank all reviewers for their valuable and constructive comments. We address the common concerns here and post a point-to-point response to each reviewer as well. We believe the quality of the paper has been improved following the reviewers' suggestions.
## Common concern 1: Choice of the optimal number of generated samples (from reviewer Dxq5 and hFUC)
Our work serves as a theoretical foundation to be extended in the future, as acknowledged by Reviewers SG4V and jB6r. The optimal augmentation size $m_G^*$ can be decided once $d_{TV}(D, D_G(S))$, $\beta_m$, and $\mathscr{T}(m_S, m_G)$ are estimated in a concrete situation. With the emergence of more advanced theory for generative models, $m_G^*$ will be estimated better in the future. The reviewers can refer to the response to common concern 3 for a more detailed discussion of our impact on practice.
## Common concern 2: Intuitive/expected results show little help to practical application (from reviewer Dxq5 and hFUC)
**Precisely analyzing expected phenomena in theory contributes to the community**. These expected results are predicted well by our theoretical results, so they empirically validate the proposed framework. We also make the picture more precise from a theoretical perspective. For example, we prove that the size of a "small dataset" is relative to the data dimension, and that the improvement brought by GDA is constant-level rather than order-level. The reviewers can refer to the response to common concern 3 for a more detailed discussion of our impact on practice.
## Common concern 3: Impact on the practice (from reviewer Dxq5 and hFUC)
This paper is mainly a theoretical work and a first step towards understanding GDA, so it is still difficult to use it to guide practice in detail. However, our results can still offer some practical insights, detailed as follows.
* Theorem 3.1 implies that improving the distribution-approximation performance of generative models is important for GDA, since decreasing $d_{TV}(D, D_G(S))$ lowers the generalization error. This motivates the design of better generative models.
* Theorem 3.1 shows that stabilizing the training of generative models benefits GDA, since it optimizes the term $\mathscr{T}(m_S, m_G)$ and thus reduces the generalization error. This motivates us to improve the training stability of generative models (e.g., GANs). Besides, some transfer learning techniques can be used to optimize $d_{TV}(D, D_G(S))$, as mentioned by Reviewer jB6r.
* Theorem 3.1 implies that if we can estimate $d_{TV}(D, D_G(S))$, $\beta_m$, and $\mathscr{T}(m_S, m_G)$ for the concrete case, then the optimal augmentation size can be determined. With the emergence of more advanced theory for generative models, $m_G^*$ will be estimated more accurately in the future.
* When $m_S$ dominates the generalization bound, Theorem 3.1 gives us a sufficient condition for when GDA works. If $d_{TV}(D, D_G(S)) = o \left(\max (\log(m)\beta_m, 1/ \sqrt{m}) \right)$, then using GDA enjoys a lower generalization error than not using it. Specifically, in both the bGMM and GAN settings (Theorems 3.2 & 3.3), we prove that $d_{TV}(D, D_G(S)) = \Omega \left(\max (\log(m)\beta_m, 1/ \sqrt{m}) \right)$, so GDA (with arbitrary $m_G$) is ineffective when $m_S$ is large enough or standard augmentation is used.
* When other terms (e.g., dimension) dominate the generalization bound, severe overfitting occurs. In the bGMM and GAN settings (Theorems 3.2 & 3.3), we prove that GDA can improve the generalization performance. Specifically, we prove the improvement is constant-level rather than order-level.
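The sufficient condition above can be illustrated numerically. Below is a toy Python sketch; the stability rate $\beta_m = 1/m$ and the two $d_{TV}$ decay rates are purely hypothetical choices for illustration, not quantities derived in the paper:

```python
import math

def classifier_rate(m: int, beta=lambda m: 1.0 / m) -> float:
    """max(log(m) * beta_m, 1/sqrt(m)) -- the rate against which d_TV is
    compared (beta_m = 1/m is a hypothetical stability rate)."""
    return max(math.log(m) * beta(m), 1.0 / math.sqrt(m))

def gda_helps(m: int, d_tv) -> bool:
    """Crude pointwise proxy for the o(.) condition: GDA is predicted
    to help at sample size m if d_TV(m) falls below the classifier rate."""
    return d_tv(m) < classifier_rate(m)

fast = lambda m: 1.0 / m     # generative model learns faster than 1/sqrt(m)
slow = lambda m: m ** -0.25  # learns slower, as in the Omega(.) lower-bound case

for m in (100, 10_000, 1_000_000):
    print(m, gda_helps(m, fast), gda_helps(m, slow))
```

With the slowly decaying $d_{TV}$ the condition fails at every $m$, mirroring the $\Omega(\cdot)$ lower bound of Theorems 3.2 & 3.3, while the fast-learning generative model satisfies it.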
With the emergence of more advanced theories, our results can be improved and give more guidance to practice. We will discuss the practical impact in more detail in the final version.
Pdf: /pdf/a354f78bcc007baf5d59de2a4ad0cbc0d26eb462.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this work the authors present new theoretical results for generative data augmentation. In particular, the authors introduce a new result that gives a bound on the generalization error of a model trained with data augmentation provided by a generative model. The authors use this bound to illustrate when GDA may or may not help generalization performance. The authors then provide specific examples of applications of this bound to common models for generative augmentation: a binary Gaussian mixture and a deep generative model (GAN). For each, the authors derive a more specialized bound and perform empirical experiments to validate the theory.
Strengths: - This work provides useful theoretical insight into the strengths and limitations of generative data augmentation. The authors present novel results that, to my knowledge, are the first results bounding the generalization error for models trained with the aid of generative data augmentation.
- I have not checked proofs in the appendix, but the theory as presented in the main text seems sound.
- The applications to both the binary Gaussian mixture and GAN setting are helpful for illustrating and expanding upon the main theoretical results.
- The experimental results do support the theoretical results presented, and the results on DGMs re-affirming the promise of diffusion models for this purpose are interesting.
- The results presented could be a useful foundation for future work.
Weaknesses: - The deep generative model results presented are narrow in scope, applying to GANs trained in a class-separated manner and to fully-supervised learning, rather than potentially semi-supervised learning or transfer learning via generative models.
- As mentioned by the authors, they do not investigate the tightness of the proposed bounds.
- I'm not sure this type of data augmentation approach is widely used enough for this analysis to be immediately impactful, though it does appear to be gaining traction.
- The clarity of the writing could be improved in places.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I was very confused by the usage of the term "learning rate" in this context. I think I now understand it to refer to the rate at which the generalization bound shrinks as a function of the amount of data, but given that the paper also discusses SGD, where the term has a different meaning, it's hard to follow. Or is there some connection that I'm not appreciating?
- I'm similarly a bit confused by what is meant by "augmentation consumption" on line 166. Can you elaborate on this?
- I found the overloading of $d$ as both the data dimension and as a way to denote a divergence confusing, particularly for theorem 3.2.
- Can you offer any further insight into when and why GDA does help in practice?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer SG4V
We thank Reviewer SG4V for the positive score and valuable comments.
## Weakness 1: Scope of Theorem 3.3
Thanks for the helpful suggestion. Though we choose specific GANs and supervised learning in Theorem 3.3, **our general framework (Theorem 3.1) is a foundation and can be extended to other generative models and training methods**.
* "GAN". The proposed framework (Theorem 3.1) can also be used to analyze other generative models, as long as we can estimate the $d_{TV}(D, D_G(S))$, and $\mathscr{T}(m_S, m_G)$ in the concrete case.
* "Class-separated manner". Deriving the generalization bound ($d_{TV}(D, D_G(S))$) for the conditional generative models is still challenging in the literature. As a preliminary work to understand the GDA, we assume the "class-separated manner" to simplify the analysis. With the emergence of better analysis for the conditional generative models, our results can also be further improved.
* "Fully-supervised learning". As a first step towards understanding the GDA, it is reasonable to investigate the basic supervised classification setting. Besides, the non-i.i.d. result of supervised learning is a starting point, which can inspire the derivation of other training methods (e.g. semi-supervised learning).
We will clarify these more clearly in the final version.
## Weakness 2: Tightness of our bounds
Thanks for the suggestion. In general, finding lower bounds for stability-based generalization bounds is still an open topic (e.g., see [23]). In this paper, **our results are tighter than directly applying the existing non-i.i.d. bounds** (see Appendix C). Lower bounds are left to future work. We will discuss this in more detail in the Limitations section.
## Weakness 3: Conditional data augmentation
Thanks for the nice suggestion. **Class-conditional generative data augmentation has been widely used**, and can empirically improve performance in many settings, including supervised learning [13, 14], semi-supervised learning [15, 16, 17], few-shot learning [18], zero-shot learning [19], adversarial robust learning [20, 21], etc. (see lines 25-26).
## Weakness 4: Clarity of the writing
Thanks for the helpful suggestion. We will try our best to improve the clarity of the writing in the final version.
## Q1: "Learning rate"
Thanks for the helpful advice. Yes, in this paper **except line 258**, it refers to the rate at which the generalization bound shrinks as a function of the amount of data ($m_S$ in this paper). On line 258, it means the step size of the SGD. We will avoid this ambiguity in the final version by **replacing the "learning rate" on line 258 with "step size"**.
## Q2: "Augmentation consumption"
Thanks for the suggestion. The consumption can be divided into three parts. First, **sampling consumption**: sampling more data requires more computation. Second, **storage consumption**: more sampled data require more memory to store. Third, **training consumption**: given more generated data, if we fix the number of training epochs, training the downstream tasks will take more time. We will clarify this definition in the final version.
## Q3: Overloading of $d$
Thanks for the suggestion. We will avoid the ambiguity in the final version by **using $\mathcal{D}$ to denote the divergence**.
## Q4: When and why GDA does help in practice?
Thanks for the suggestion. It can be divided into two cases, one with large $m_S$ and the other with small $m_S$ (compared with other terms in the bound), detailed as follows.
* Large $m_S$. In Corollary 3.1, we conclude that if $d_{TV}(D, D_G(S)) = o \left(\max (\log(m)\beta_m, 1/ \sqrt{m}) \right)$, then using GDA enjoys a lower generalization error than not using it, because the generative model learns faster than the classifier. In both the bGMM and GAN settings, we prove that $d_{TV}(D, D_G(S)) = \Omega \left(\max (\log(m)\beta_m, 1/ \sqrt{m}) \right)$, so GDA (with arbitrary $m_G$) is ineffective when $m_S$ is large enough (Theorems 3.2 & 3.3).
* Small $m_S$. In this case, other terms (e.g., dimension) dominate the generalization bound, and severe overfitting occurs. In the bGMM and GAN settings, we prove that GDA can bring a constant-level improvement to the generalization performance (Theorems 3.2 & 3.3). This is because more data significantly relieve overfitting.
We will discuss this in more detail in the final version.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thank you for your thoughtful responses! Reading through this and the other responses has convinced me to bump up my score. I agree that this seems to be an interesting step towards analyzing the performance of generative data augmentation. My confidence is still low as I am pretty unfamiliar with related work on generalization bounds.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thanks very much for your valuable comments and the update on the rating! | null | null | null | null | null | null |
Fair Canonical Correlation Analysis | Accept (poster) | Summary: This paper proposes a fair CCA algorithm that aims to find fair CCA projection matrices. The authors argue that it is necessary to develop an appropriate CCA algorithm with fairness guarantees. In the presence of a sensitive attribute and unfairness in the observed data, the proposed algorithms successfully reduce the unfairness of the learned projection matrices. Specifically, the two proposed algorithms, MF-CCA and SF-CCA, are optimized using gradient descent on Stiefel manifolds. The convergence of these algorithms is also theoretically guaranteed, and several experiments provide empirical support for the proposed algorithms.
Strengths: - The paper is generally well-written and easy to follow overall.
- To the best of my knowledge, this work is the first to address the issue of unfairness in CCA, which could be a milestone.
- The fairness metric targeted (i.e., Correlation Disparity Error in Definition 1) is well-defined and fits well with other general fairness notions used in fair prediction tasks (e.g., demographic parity).
- The proposed algorithms (MF-CCA and SF-CCA) align well with the theoretical studies presented (Theorems 4 and 5).
- The roles of the two algorithms are well specified. MF-CCA finds the optimal solution by minimizing CCA error and unfairness losses simultaneously, while SF-CCA provides an advantage in controlling the trade-off between CCA error and unfairness losses.
- Empirical results validate the efficacy of MF-CCA and SF-CCA, not only on synthetic data but also on real datasets.
Weaknesses: - There is no theoretical guarantee for fairness provided. If the authors could theoretically demonstrate that the solutions to equations 7 and 9 have low $\mathcal{E}^{k} (U, V),$ as has already been empirically shown, the contribution would be more novel.
- To illustrate the ability of SF-CCA in controlling the trade-off between error and fairness, a visualization such as Pareto-front lines (commonly used in fair classification problems [1, 2]) would be beneficial. Table 1 only presents results using a single $\lambda$ selected from the set [1e-2, 1e-1, 1, 10, 100].
[1] https://arxiv.org/abs/1802.06309
[2] https://arxiv.org/abs/2103.06503
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Which penalty function, $\phi$, was used in the experiments?
- Is the computation time more significantly impacted by the size of the training data or the dimension of the input feature?
- Could the authors provide the definition of a 'componentwise Lipschitz continuous function' as stated in Assumption A?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Unlike the vanilla CCA which solves the objective by optimizing two matrices, $U$ and $V$, the proposed algorithms require more matrices to be optimized, the number of which increases with the number of sensitive attributes, $K.$
- Naturally, the computational cost of these proposed algorithms is higher than that of the vanilla CCA due to the fairness objectives they must minimize. Any future work aimed at reducing this computational cost would certainly be novel.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** There is no theoretical guarantee for fairness provided. If the authors could theoretically demonstrate that the solutions to equations 7 and 9 have low \(\epsilon^k(\mathbf{U},\mathbf{V})\) as has already been empirically shown, the contribution would be more novel.
**Response:** Thank you for your comment. Since the CCA problem, like PCA, is non-convex, Theorem 4 (multi-objective) guarantees that the norm of the Pareto descent direction diminishes, so the algorithm's iterates converge to stationary fair subspaces. To make the CCA objective convex, we can introduce regularization such as an $\ell_2$-square penalty with hyperparameter $\alpha>0$, denoted as $R_{\alpha}(\mathbf{U},\mathbf{V})$, that keeps the Hessian matrices of the objectives positive semi-definite. This adjustment aims to drive the fairness error $\tilde{f}(\mathbf{U},\mathbf{V}) - \tilde{f}(\mathbf{U}^{\star},\mathbf{V}^{\star})$ toward zero, where $\tilde{f}(\mathbf{U},\mathbf{V}) = f(\mathbf{U},\mathbf{V}) + R_{\alpha}(\mathbf{U},\mathbf{V})$, and $f$ represents the multi- or single-objective formulation as in (7) or (9), respectively. However, the regularization-based approach requires more hyperparameters, leading us to focus primarily on the standard non-convex method in the CCA literature. Thus, we offer a theoretical first-order stationarity analysis using $\left\Vert \mathbf{P} \right\Vert$ and experimental evidence of algorithmic convergence toward fair subspaces.
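For readers less familiar with this optimization setting, here is a minimal, self-contained sketch of one retraction-based gradient step on the Stiefel manifold, the kind of update manifold gradient methods like MF-CCA/SF-CCA build on. The toy objective, step size, and QR retraction below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def qr_retraction(U):
    """Map a full-column-rank matrix back onto the Stiefel manifold
    St(p, k) = {U : U^T U = I} via a thin QR decomposition."""
    Q, R = np.linalg.qr(U)
    # Fix column signs so the retraction is continuous (standard convention).
    return Q * np.sign(np.diag(R))

def stiefel_step(U, egrad, lr=0.1):
    """One descent step: project the Euclidean gradient onto the tangent
    space at U, move along the negative projection, then retract."""
    sym = 0.5 * (U.T @ egrad + egrad.T @ U)
    rgrad = egrad - U @ sym          # tangent-space projection
    return qr_retraction(U - lr * rgrad)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
U = qr_retraction(rng.standard_normal((5, 2)))
# Toy objective f(U) = -trace(U^T A U); Euclidean gradient is -(A + A^T) U.
for _ in range(50):
    U = stiefel_step(U, -(A + A.T) @ U)
print(np.allclose(U.T @ U, np.eye(2)))  # iterates remain orthonormal: True
```

The key property illustrated is that every iterate stays exactly on the manifold, which is why no orthogonality constraint needs to be enforced separately.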
> **W2:** To illustrate the ability of SF-CCA in controlling the trade-off between error and fairness, a visualization such as Pareto-front lines (commonly used in fair classification problems [1, 2]) would be beneficial. Table 1 only presents results using a single $\lambda$ selected from the set $[10^{-2}, 10^{-1}, 1, 10, 100]$.
**Response:** Thank you for your valuable comment.
- Our experiments examined the influence of the $\lambda$ parameter in SF-CCA on correlation and disparity (Figure 12 in the appendix). As expected, higher $\lambda$ values led to decreased disparity and correlation. Notably, a gradual decline in correlation was observed with increasing $\lambda$, while disparity decreased rapidly. This underscores our framework's ability to boost fairness without significant accuracy loss. In Figure 12 (a), correlation plateaued between $\lambda$ values of 0.01 and 10, while disparity swiftly neared zero at $\lambda = 1$. Even at $\lambda = 10$, the correlation remained reasonable. Importantly, the optimal correlation-fairness balance was attained at $\lambda = 1$.
- We extended our experiments to real datasets, illustrated in Figure 2 of the attached PDF. A consistent pattern emerged: as fairness improved (disparity decreased), accuracy (correlation) declined. Yet, by pinpointing an optimal $\lambda$, we significantly improved fairness without compromising accuracy.
Hence, our approach attains fairness with competitive accuracy, distinguishing it from traditional CCA methods that overlook fairness concerns.
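The monotone trade-off observed in these sweeps is what scalarization predicts in general. A purely illustrative toy example (not the SF-CCA objective) with one scalar variable, where the $\lambda$-weighted sum has a closed-form minimizer:

```python
# Two competing quadratic objectives: "error" is lowest at x = 1,
# "disparity" is lowest at x = 0.  The scalarized problem
#     min_x  error(x) + lam * disparity(x)
# has closed-form minimizer x* = 1 / (1 + lam): setting the derivative
# 2(x - 1) + 2*lam*x to zero gives x(1 + lam) = 1.
error = lambda x: (x - 1.0) ** 2
disparity = lambda x: x ** 2

for lam in (0.01, 0.1, 1.0, 10.0, 100.0):
    x_star = 1.0 / (1.0 + lam)
    print(f"lam={lam:<6} error={error(x_star):.4f} disparity={disparity(x_star):.4f}")
```

Sweeping $\lambda$ traces the Pareto front: as $\lambda$ grows, disparity falls toward zero while error rises, exactly the qualitative pattern reported for Figure 12.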
> **Q1:** Is the computation time more significantly impacted by the size of the training data or the dimension of the input feature?
**Response:** We've presented computation times for the algorithms on real and synthetic datasets in Table 2. CCA's efficiency stems from requiring only a single eigenvalue decomposition, while MF-CCA requires extensive gradient-direction searches and repeated gradient descent. SF-CCA, with a fixed trade-off parameter, runs faster than MF-CCA despite hyperparameter tuning. While comparing MF-CCA and SF-CCA is challenging due to their trade-offs, the extra time invested aligns with our goal of balancing fairness enhancement with accuracy retention.
Further examining time complexity, we assessed SF-CCA and MF-CCA sensitivity to subgroup count. Figure 11 in Appendix B.3 reveals MF-CCA's increased sensitivity to subgroup count compared to SF-CCA. This corresponds to their performance in Table 2, where SF-CCA excelled in computational efficiency. SF-CCA's trade-offs are apparent: it achieves a balance between computational efficiency and fairness, while MF-CCA guarantees stronger fairness at the cost of longer computation times for more subgroups.
For sensitivity analysis on sample count ($n$) and feature count ($d$), we conducted two sets of new experiments on synthetic data with fixed hyperparameters. Each experiment, repeated 20 times using CCA, MF-CCA, and SF-CCA, is detailed in Figure 1 of the attached PDF.
- For the first set, we maintained sample size and varied feature size [50, 100, 150, 200, 250, 300, 350, 400]. MF-CCA exhibited extended runtimes with increased features, while SF-CCA's runtime remained relatively steady.
- In the second set, fixing feature size, we varied sample size [600, 800, 1000, 1200, 1400, 1600, 1800, 2000]. The impact of larger sample size on SF-CCA's runtime was relatively minimal.
> **Q2:** Which penalty function, $\phi$, was used in the experiments?
**Response:** Thank you for pointing this out. The function $\phi$ can take various forms, including absolute, square, exponential, and more. In our experiments, we specifically focus on the absolute function. Its strength lies in its resilience to minor disparities, unlike the square function, which tends to rapidly approach zero. We will ensure that these specific details are included in the paper for improved clarity.
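To make the role of $\phi$ concrete, here is a toy sketch of a group-wise correlation disparity computed with the absolute penalty. The disparity formula below (penalized deviation of each group's projected correlation from the pooled correlation, averaged over groups) is our illustrative reading, not necessarily the paper's exact Definition 1:

```python
import numpy as np

def proj_corr(X, Y, u, v):
    """Correlation between the 1-D projections X @ u and Y @ v."""
    return float(np.corrcoef(X @ u, Y @ v)[0, 1])

def disparity(X, Y, groups, u, v, phi=np.abs):
    """Penalty phi applied to each group's deviation from the pooled
    projected correlation, then averaged over groups."""
    pooled = proj_corr(X, Y, u, v)
    devs = [phi(proj_corr(X[groups == g], Y[groups == g], u, v) - pooled)
            for g in np.unique(groups)]
    return float(np.mean(devs))

rng = np.random.default_rng(1)
n, d = 400, 3
X = rng.standard_normal((n, d))
Y = X + 0.5 * rng.standard_normal((n, d))   # two correlated views
groups = rng.integers(0, 2, size=n)         # binary sensitive attribute
u = v = np.ones(d) / np.sqrt(d)
print(round(disparity(X, Y, groups, u, v), 4))  # small: groups are i.i.d. here
```

Swapping `phi=np.square` gives the square penalty mentioned above, which down-weights small disparities much more aggressively than the absolute penalty.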
> **Q3:** Could the authors provide the definition of a 'componentwise Lipschitz continuous function' as stated in Assumption A?
**Response:** Thank you for your question. A componentwise Lipschitz continuity means that each $\nabla f_i$ for $i \in [M]$ has Lipschitz continuity on the manifold $\mathcal{M}$ with constant $L_{i,M}$, and $\nabla \mathbf{F}$ is componentwise Lipschitz continuous on $\mathcal{M}$ with constant $L_F:=\max_{i=1,\ldots, M} L_{i,M}$. We will clarify this in Assumption A.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses.
Most of my concerns/questions have been addressed.
- (W1) - Addressed.
Thank you for the clarification.
- (W2) - Addressed.
I appreciate your efforts in providing the results of additional experiments.
It might be beneficial to include Figure 2 in the PDF (and even Figure 3 in the PDF), possibly along with Figure 12.
Considering this work as a milestone, these trade-off comparisons could serve as baselines if a new study related to fair CCA appears.
- (Q1)
Thank you for providing additional experiments regarding sensitivity analysis on $n$ and $d.$
I think this point could be interpreted as a limitation of MF-CCA, representing a trade-off between achieving almost perfect fairness and computational time.
- (New question) I believe that not only MF-CCA but also SF-CCA with a sufficiently large $\lambda$ could achieve (almost perfect) fairness.
However, the computation time of SF-CCA is lower than that of MF-CCA.
Given this context, what advantages does MF-CCA offer compared to SF-CCA?
- (Q2, Q3) - Addressed.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 6yqF: MF-CCA vs. SF-CCA
Comment: > **New Question:** I believe that not only MF-CCA but also SF-CCA with a sufficiently large $\lambda$ could achieve (almost perfect) fairness. However, the computation time of SF-CCA is lower than that of MF-CCA. Given this context, what advantages does MF-CCA offer compared to SF-CCA?
**Response:** Thank you for your great question. SF-CCA simplifies optimization, reduces computational demand, and controls fairness-accuracy trade-offs through $\lambda$ adjustments. MF-CCA, on the other hand, offers noteworthy advantages, as detailed below:
**I.** *Hyperparameter Search-Free:* MF-CCA operates without hyperparameters, automatically identifying a Pareto stationary point. SF-CCA, on the other hand, requires tuning $\lambda$, which can be complex and contingent on dataset and application specifics. As illustrated in Figure 2 (attached to the [general response](https://openreview.net/forum?id=W3cDd5xlKZ&noteId=OZk945tKNu)):
- Synthetic data achieves fairness stability around $\lambda=2$.
- MHAAPS data reaches correlation stability at $\lambda=10^{-2}$.
- NHANES data demonstrates correlation stability near $\lambda=10^{-1}$.
These instances emphasize the varying $\lambda$ search range across datasets, demanding substantial fine-tuning efforts.
Expanding the $\lambda$ interval may seem like a solution, yet a **larger $\lambda$ could drastically reshape** the optimization landscape, possibly necessitating increased iterations to minimize the modified objective. For example, in our experiment, with $\lambda=100$, SF-CCA yielded a disparity error of $\geq 0.1$, whereas it was $\leq 0.0001$ with $\lambda=10$ using the same number of iterations.
**II.** *Robustness and Flexibility:* MF-CCA allows us to adjust the relative weights assigned to different fairness objectives (i.e., $f_2 \ldots f_M$). This is important for dealing with imbalanced data, where different groups may have different levels of representation. In contrast, SF-CCA depends solely on $\lambda$ and can be suboptimal in imbalanced data.
**III.** *Adaptive Fairness Trade-offs:* Achieving perfect fairness in every situation might not always be feasible or desirable. MF-CCA finds a Pareto stationary point that strikes an appropriate balance between fairness and accuracy. This adaptability is crucial when overly strict fairness constraints could lead to suboptimal performance in other critical aspects of the model.
**IV.** *Balancing Diverse Fairness Metrics:* In reality, fairness can span multiple dimensions, including metrics like demographic parity, equalized odds, and group sufficiency. MF-CCA can address these objectives together, achieving well-rounded fairness across dimensions. Especially useful when SF-CCA with a single regularization parameter can't reconcile complex fairness concepts.
**Table 1:** Comparison of MF-CCA and SF-CCA
| Feature | MF-CCA | SF-CCA |
|---|---|---|
| Hyperparameters | No | Yes (λ) |
| Fairness-accuracy trade-off | Automatic | Controlled by λ |
| Flexibility | Can adjust weights for different fairness objectives | Depends solely on λ |
| Adaptability | Finds a Pareto stationary point | Can be suboptimal in imbalanced data |
| Ability to balance diverse fairness metrics | Yes | Requires additional hyperparameters |
Table 1 summarizes the comparison between MF-CCA and SF-CCA. We will include this comparison in the final revision. | Summary: This paper addresses fairness and bias in Canonical Correlation Analysis (CCA). The authors propose a framework that minimizes correlation disparities associated with protected attributes, reducing unfairness without compromising accuracy. Experimental evaluation validates the effectiveness of the approach. The findings emphasize the importance of fairness in CCA applications.
Strengths: - Novel Contribution: The paper introduces a framework to address fairness and bias concerns in Canonical Correlation Analysis (CCA), making a valuable contribution to the field.
- Practical Relevance: By focusing on CCA, a widely used statistical technique, the paper addresses a real-world problem and emphasizes the importance of considering fairness in data analysis.
- Experimental Validation: The authors conduct experiments on both synthetic and real-world datasets, providing empirical evidence of the effectiveness of their proposed framework in reducing unfairness without compromising the accuracy of the CCA model.
- Clear Presentation: The abstract provides a concise overview of the paper's objectives, approach, and findings, making it easy to understand the key contributions of the research.
Weaknesses: - Lack of Detailed Metrics: The paper could benefit from providing more specific details about the metrics used to measure unfairness and bias in CCA. This would enhance the transparency and reproducibility of the experimental evaluation.
- Limited Comparison: The paper does not explicitly compare the proposed framework with existing fairness-aware CCA methods. Including such comparisons would provide insights into the relative performance and advantages of the proposed approach.
- Scope and Generalizability: While the paper addresses fairness concerns in CCA, the focus is limited to this specific technique. It would be beneficial to discuss the potential implications of the findings for other statistical methods or broader machine learning applications.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: - In the experimental setup, what considerations were made in selecting the synthetic and real-world datasets? Were there any specific characteristics of these datasets that influenced the results or generalizability of the findings?
- While the paper focuses on fairness and bias in CCA, could you elaborate on the potential implications of the findings for other statistical methods or broader machine learning applications? How transferable do you believe the proposed framework is beyond the scope of CCA?
- Are there any additional factors or considerations that should be taken into account when applying the proposed framework in practical settings? For instance, how would the framework handle missing data, outliers, or high-dimensional datasets?
- Given the goal of reducing unfairness, how does the proposed framework balance the trade-off between fairness and overall predictive accuracy? Were there any cases in the experiments where the framework significantly sacrificed accuracy to achieve fairness?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: - Fairness Metrics: The paper could provide a more detailed discussion on the fairness metrics used to evaluate the proposed framework. Further elaboration on the choice and justification of these metrics would enhance the clarity and interpretability of the experimental results.
- Generalizability: The generalizability of the findings may be limited by the specific characteristics and distribution of the datasets used in the experiments. The authors could discuss the potential challenges or variations that may arise when applying the framework to other datasets or domains.
- Trade-off between Fairness and Accuracy: The paper briefly mentions that the proposed framework minimizes unfairness without compromising the accuracy of the CCA model. However, a more in-depth analysis of the potential trade-off between fairness and accuracy would provide a clearer understanding of the framework's limitations and its impact on prediction performance.
- Real-world Application Challenges: While the framework demonstrates effectiveness in reducing unfairness, the paper does not extensively discuss the practical challenges that may arise when applying the proposed approach to real-world scenarios, such as handling missing data, complex feature distributions, or scalability issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> **Q1:** In the experimental setup, what considerations … real-world datasets? Were there any specific characteristics of these datasets that influenced the results or generalizability of the findings?
**Response:** We carefully selected synthetic and real-world datasets to evaluate the fair CCA method across various scenarios. Real-world datasets from diverse fields further validated its applicability. More details about these datasets can be found in Appendix B. For example, ADNI data is analyzed for fairness in medical imaging classification [[ZYH22](https://arxiv.org/abs/2210.01725)]. Fairness concerns with ADNI image data could stem from under-representation of specific populations (ethnic, cultural, or economic backgrounds). Moreover, over- or under-representation of age, sex, or biometric groups might affect data fairness. Disease distribution across sensitive attributes, such as higher AD occurrence in women, can also impact fairness [[MS16](https://www.thelancet.com/journals/laneur/article/PIIS1474-4422(16)00067-3/fulltext)].
> **Q2:** While the paper focuses on fairness and bias in CCA, could … machine learning applications? How transferable do you believe the proposed framework is beyond the scope of CCA?
**Response:** Thank you for this valuable question.
- As suggested by reviewer MdZ1, our work holds potential for extension to multiple modalities. CCA has been adapted for intricate scenarios like multiset CCA, kernel CCA for nonlinear situations, and deep CCA for imaging data [[ZYC20](https://pubmed.ncbi.nlm.nih.gov/32592530/)].
- Our algorithm for smooth manifold multi-objective optimization can also tackle other problems like fair PCA, as both can be framed as optimization challenges on smooth manifolds [[Nicolas23](https://www.nicolasboumal.net/book/IntroOptimManifolds_Boumal_2023.pdf)].
- As CCA finds applications in downstream tasks like clustering, classification, and manifold learning, our approach can effectively ensure fairness in such scenarios when utilizing CCA methods [[ZYC20](https://pubmed.ncbi.nlm.nih.gov/32592530/)].
Thus, our framework has potential beyond CCA, but these extensions could introduce novel optimization and computational challenges, which we leave for future research.
> **Q3:** Are there any additional factors or considerations … in practical settings? For instance, how would the framework handle missing data, outliers, or high-dimensional datasets?
**Response:** Thanks for your question.
- Extending fair CCA to handle missing data is notably more challenging than addressing missing data within standard CCA, given the potential for biases and distorted correlations. This challenge is emphasized in [[F21](https://onlinelibrary.wiley.com/doi/full/10.1002/int.22415),[ZL21](https://proceedings.neurips.cc/paper_files/paper/2021/hash/85dca1d270f7f9aef00c9d372f114482-Abstract.html)]. The added complexity arises from the integration of fairness considerations with missing data biases.
- Outliers have the capacity to disrupt CCA, affecting reliability and interpretations, and potentially steering fair CCA toward suboptimal outcomes.
- Sparse CCA addresses high-dimensional data using techniques like $L_1$ penalty to induce sparsity in canonical vectors. However, incorporating Sparse CCA into multi-objective optimization introduces challenges tied to nonsmooth optimization issues.
Addressing these concerns, especially in our multi-objective context, will require novel formulations and analyses, which we leave for future work.
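For reference, the $L_1$ penalty mentioned above for Sparse CCA is commonly handled with the soft-thresholding (proximal) operator applied to the canonical vectors after each gradient step; a generic sketch of that operator (standard proximal-gradient machinery, not the paper's method):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_1: shrinks each entry toward zero and
    zeroes entries with magnitude below tau, inducing sparse canonical vectors."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Because this operator is nonsmooth at zero, combining it with manifold constraints and multiple objectives creates exactly the nonsmooth optimization issues the response points out.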
> **Q4:** Given the goal of reducing unfairness, how does the proposed framework balance the trade-off between fairness and overall predictive accuracy?
**Response:** Thank you for your question.
- Our experiments explored the impact of the $\lambda$ parameter in the SF-CCA method on correlation and disparity, as depicted in Figure 12 of the appendix. The results confirmed our expectations: higher $\lambda$ values led to reduced disparity and correlation. Interestingly, we observed a gradual decline in correlation as $\lambda$ increased, contrasted by a rapid reduction in disparity. This showcases our framework's ability to enhance fairness while preserving accuracy. In Figure 12 (a), the correlation plateaued between $\lambda$ values of 0.001 and 10, while disparity rapidly decreased to near-zero at $\lambda = 1$. Even at $\lambda = 10$, the correlation remained reasonable. Notably, the optimal balance between correlation and fairness was achieved at $\lambda = 1$.
- We extended our experiments to real datasets, illustrated in Figure 2 of the attached PDF in the general rebuttal. A consistent pattern emerged: as fairness improved (disparity decreased), accuracy (correlation) declined. Yet, by pinpointing an optimal $\lambda$, we significantly improved fairness without compromising accuracy.
In summary, our approach attains fairness with competitive accuracy, distinguishing it from traditional CCA methods that overlook fairness concerns.
> **Q5:** Were there any cases in the experiments where the framework significantly sacrificed accuracy to achieve fairness?
**Response:** Our empirical results show that our framework maintains accuracy while enhancing fairness. In NHANES, MF-CCA and SF-CCA correlations are only 0.5% and 1% lower than CCA in 2 dimensions. Max disparity drops by 26.4% and 29.5% compared to CCA, indicating fairness gains with minimal accuracy loss across dimensions (Table 1).
Figure 3 in the attached PDF further validates this benefit. Each dataset panel displays two cases for specific dimensions. MF-CCA and SF-CCA columns show changes in correlation ($\rho$), max disparity ($\Delta_{\max}$), and disparity sum ($\Delta_\Sigma$). Pearson correlation $\rho$ change is slight, while $\Delta_{\max}$ and $\Delta_\Sigma$ changes are substantial, signifying fairness improvement without significant accuracy sacrifice. | Summary: This paper investigates the concept of Fair CCA, focusing on addressing the potential bias that arises when analyzing the relationship between two sets of variables using CCA, a widely utilized statistical technique. The conventional application of CCA fails to account for the impact of sensitive attributes like gender or race, leading to potential biases. In response, this study aims to bridge this gap by integrating fairness principles into CCA. The authors introduce the fairness issue within the context of CCA and propose two distinct methods to tackle it: a multi-objective approach and a single-objective approach, each offering unique strengths. The effectiveness of the proposed methods is substantiated through empirical and theoretical analyses, confirming their value in addressing the fairness concerns in CCA.
Strengths: 1. The problem addressed in this paper holds significant importance. Given the increasing influence of machine learning algorithms and methods on individuals and society, it becomes crucial to delve into the study of fairness within this domain. By mitigating bias issues in machine learning, we can contribute to a more equitable outcome and benefit vulnerable groups. While numerous works have explored fairness in machine learning, the majority of them focus on the supervised learning scenario. In contrast, this paper ventures into uncharted territory by examining the fairness issue in CCA, an unsupervised learning approach. This unique perspective underscores the urgency and significance of studying fairness within the context of CCA.
2. The concepts and methods presented in this paper exhibit a high degree of novelty. To the best of my knowledge, this is the first study to explore fairness within the context of CCA. Fairness, being a multifaceted concept, encompasses various definitions. In the realm of supervised learning, researchers have proposed different definitions such as demographic parity, equalized odds, and group sufficiency. Therefore, establishing a practical and reasonable definition becomes crucial. This paper introduces the notion of fairness criteria through correlation disparity error, which takes into account both global and group-wise correlations. The resulting fairness definition is intuitive and reasonable. The incorporation of fairness as additional objectives (in the multi-objective framework) or constraints (in the single-objective framework) is accomplished seamlessly, aligning with the natural progression of the problem. Furthermore, the authors introduce the Riemannian manifold in their solutions, which promotes convergence and facilitates computation, thereby introducing a novel aspect to the research.
3. The paper exhibits good writing quality, characterized by clarity and soundness. It effectively guides readers through its content, ensuring easy comprehension from the motivation and definition to the methods and solutions. Notably, Figure 1 provides a clear and intuitive visualization that enables immediate understanding of the proposed method's functionality. The effectiveness of the methods is supported by robust experimental results on synthetic and real data. Furthermore, Figure 3 serves as a compelling validation of fairness, as it visually demonstrates the improved proximity between the two groups after the projection using the proposed methods. Overall, the paper is meticulously crafted, maintaining a high level of clarity and rigor throughout.
Weaknesses: 1. Limited discussion on multiple modalities: While CCA is not restricted to two modalities, the paper primarily focuses on this scenario. It would be beneficial to discuss a more general setting involving multiple modalities and computing correlations under the fairness setting.
2. Figure 4 lacks an obvious trend: The authors could consider including Figure 9 from the supplementary file in the main body, as it provides a more intuitive demonstration of the method's effectiveness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Clarification on "critical Pareto" (line 164): The definition of "critical Pareto" is unclear. It would be helpful to provide a more intuitive explanation for better understanding.
2. Elaboration on optimization problem (8) and steepest descent direction (lines 174-175): The statement regarding the optimization problem and steepest descent direction requires further elaboration to enhance clarity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The method is currently limited to two modalities. Even though this is the most common scenario in CCA, it would be interesting to see how the method can be extended to more than two modalities.
Overall, this is a well-written and informative paper that makes a significant contribution to the field of machine learning. The proposed methods are novel and effective, and the experimental results are convincing. However, the method is currently limited to two modalities, and it would be interesting to see how it can be extended to more than two modalities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** Limited discussion on multiple modalities: While CCA is not restricted to two modalities, the paper primarily focuses on this scenario. It would be beneficial to discuss a more general setting involving multiple modalities and computing correlations under the fairness setting.
**Response:** Thank you for your valuable suggestion. We will discuss extending F-CCA to multiple modalities or generalized scenarios in our paper as future work. Existing multiset CCA methods mostly revolve around the concept of maximizing pairwise correlation sums across subsets. We intend to leverage this concept to accommodate multiple modalities within our framework. Yet, these extensions come with notable challenges. Effectively handling fairness, bias, and solving single/multi-objective optimization problems across multiple modalities or sets requires fresh formulations and algorithmic approaches. This entails optimizing correlation or generalized correlation among sets while factoring in fairness, stability, and computational complexity, particularly when dealing with high-dimensional data issues [[ZYC20](https://pubmed.ncbi.nlm.nih.gov/32592530/),[TT11](https://link.springer.com/content/pdf/10.1007/s11336-011-9206-8.pdf),[X68](https://psycnet.apa.org/record/473742008-115?doi=1)]. Hence, developing a fair Generalized CCA approach requires addressing numerical stability and algorithm design, which we consider as future research problems.
> **W2:** Figure 4 lacks an obvious trend: The authors could consider including Figure 9 from the supplementary file in the main body, as it provides a more intuitive demonstration of the method's effectiveness.
**Response:** Thank you for your suggestion. We'll replace Figure 4 with Figure 9, illustrating aggregate disparities across multiple datasets with varying projected dimensions. In our discussion, we'll detail SF-CCA and MF-CCA in connection with Figure 9. Notably, CCA consistently exhibits greater disparities than our SF-CCA and MF-CCA, highlighting our framework's effectiveness.
> **Q1:** Clarification on "critical Pareto" (line 164): The definition of "critical Pareto" is unclear. It would be helpful to provide a more intuitive explanation for better understanding.
**Response:** Thank you for your comment. In multi-objective optimization, conflicting objectives are simultaneously optimized, leading to trade-offs between solutions. The Pareto front comprises solutions that can't be enhanced in one objective without worsening another, defining Pareto optimal solutions. Pareto critical solutions lie on this front, representing vital trade-offs between objectives and providing optimal choices with distinct trade-offs for decision-makers.
In Section 3.3, we introduced the concept of Pareto critical points. A point is considered Pareto critical if the image of the gradient of $\mathbf{F}$ at that point does not intersect the positive orthant of $\mathbb{R}^M$. In other words, the solution $\mathbf{P}$, as obtained from the optimization problem $(8)$, satisfies $\max_{i \in M} \langle \nabla f_i(\mathbf{U}, \mathbf{V}), \mathbf{P} \rangle = 0$. In simpler terms, there are no feasible directions $\mathbf{P} \neq \mathbf{0}$ that can simultaneously satisfy $\langle \nabla \mathbf{F}_i(\mathbf{U}, \mathbf{V}), \mathbf{P} \rangle \geq 0$ for all objectives while having at least one objective $i$ with $\langle \nabla \mathbf{F}_i(\mathbf{U}, \mathbf{V}), \mathbf{P} \rangle > 0$.
In summary, Pareto critical points ensure that no feasible directions can reduce the norm of the component-wise gradient. This concept is similar to the single-objective case, where critical points prevent feasible descent directions from reducing the gradient norm.
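For intuition, in the Euclidean two-objective case this criticality condition can be checked numerically: a point is Pareto critical iff the zero vector lies in the convex hull of the objective gradients, i.e., the min-norm convex combination of the gradients vanishes. The sketch below uses the MGDA-style closed form for two objectives; it is an illustration only, not the paper's manifold algorithm:

```python
import numpy as np

def min_norm_two(g1, g2):
    """Min-norm point on the segment between gradients g1 and g2."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # identical gradients
        return g1
    # Closed-form minimizer of ||gamma*g1 + (1-gamma)*g2|| over gamma in [0, 1].
    gamma = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return gamma * g1 + (1.0 - gamma) * g2

def is_pareto_critical(g1, g2, tol=1e-8):
    """No feasible direction improves both objectives iff the min-norm point is ~0."""
    return np.linalg.norm(min_norm_two(g1, g2)) < tol
```

With opposing gradients such as `(1, 0)` and `(-1, 0)` the min-norm point is zero and the point is Pareto critical; when the gradients share a common ascent/descent direction, it is not.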
> **Q2:** Elaboration on optimization problem (8) and steepest descent direction (lines 174-175): The statement regarding the optimization problem and steepest descent direction requires further elaboration to enhance clarity.
**Response:** Thank you for your question. We have addressed subproblem (8) efficiently in Appendix A2 and discussed the manifold retraction operator in Appendix A1. A reference to Appendix A will be added in the main text, along with a summary of the following discussions.
We elaborate on subproblem (8) and its relation to the convergence of standard gradient descent in Euclidean space. The definition of Pareto critical on a manifold extends the classical critical condition $\nabla f (x) = 0$ for the single-objective case on Euclidean space ($M = 1$) to vector optimization on a manifold. Indeed, following single-objective optimization, we are looking for the directions $\mathbf{P}_t$ such that $ \left\Vert\mathbf{P}_t\right\Vert \rightarrow 0$. To achieve this goal, we follow existing literature on multi-objective optimization [[FS00](https://link.springer.com/article/10.1007/s001860000043)], and first use Lemma 7 (appendix) to show that the unconstrained optimization subproblem (8) has a unique solution $\mathbf{P}_t$ on the manifold and can be expressed in closed form. Hence, the solution to subproblem (8) can be computed using a variety of minimax methods, and we found that the Goal Attainment Method [[G75](https://ieeexplore.ieee.org/document/1101105), [FP86](https://www.infona.pl/resource/bwmeta1.element.ieee-art-000004256366), [F86](https://www.sciencedirect.com/science/article/abs/pii/B9780080316659500122)] gives a more stable and accurate solution for our subproblem (8). Next, Lemmas 9-10 (appendix) demonstrate that $\mathbf{P}_t$ is a descent direction and offer an estimate of the function $\mathbf{F}$'s decrease along the solution of (8). These findings mirror the descent of the objective in the single-objective case and are extensively explored in the optimization literature for gradient descent-type methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I will maintain my initial score of accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer MdZ1
Comment: Thank you for your time and effort in reviewing our paper. | Summary: This paper addresses a fairness issue that arises in CCA, proposing a fair CCA that effectively trades off correlation disparity errors w.r.t. sensitive attributes against correlation w.r.t. global projection subspaces. It introduces two optimization frameworks (multi-objective and single-objective), then develops corresponding efficient algorithms based on the generalized Stiefel manifold, together with convergence analysis. Experimental results on both synthetic and real datasets are provided to validate the theoretical findings.
Strengths: S1. The paper is very well written. Many illustrations (e.g., Fig. 1) are greatly helpful in grasping the ideas.
S2. The proposed optimization frameworks are convincing, and the corresponding translation techniques enable the use of efficient algorithms in the manifold literature. In addition, the theoretical analysis (Theorems 3 and 4) provides convergence guarantees of the algorithms.
S3. Experimental results emphasize the efficacy of the proposed approach, and the discussions therein are insightful.
Weaknesses: W1. The translations for efficient algorithms, (8) and (9), can further be detailed for those who are not familiar with the manifold literature.
W2. For Theorems 3 and 4, the proof sketches (or technical contributions if any) are preferred to be included in the main body.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See Weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** The translations for efficient algorithms, (8) and (9), can further be detailed for those who are not familiar with the manifold literature.
**Response:** Thank you for your valuable comment. We have addressed the efficient solutions to subproblems (8) and (10) in Appendices A2 and A3, and discussed the manifold retraction operator used in these sections in Appendix A1. To provide detailed explanations on the subproblems, we will add a sentence in the main text referring readers to Appendix A.
Regarding the translations for efficient algorithms for solving subproblems (8) and (10), we understand the importance of providing additional details, particularly for readers who may not be familiar with the manifold literature. Following your valuable suggestions, we will include comprehensive discussions about subproblem (8) to enhance clarity and provide necessary context. We note that since subproblem (10) can be regarded as a specific case of subproblem (8) with $M=1$, where the function $f$ is replaced by its regularized counterpart on a smooth manifold, we focus on the latter. We will primarily include a summary of the following discussions in the paper.
We provide a more detailed explanation of the multi-objective subproblem (8) and its connection to the convergence of standard gradient descent on Euclidean space. Notably, the definition of Pareto critical on Manifold extends the classical critical condition $\nabla f (x) = 0$ for the single objective case on Euclidean space (i.e., $M = 1$) to vector optimization on Manifold.
Indeed, following single-objective optimization, we are looking for the gradient-based directions $\mathbf{P}_t$ such that $ \left\Vert\mathbf{P}_t\right\Vert \rightarrow 0$. To achieve this goal, we follow existing literature on multi-objective optimization [[FS00](https://link.springer.com/article/10.1007/s001860000043)], and first use Lemma 7 (appendix) to show that the optimization subproblem (8) has a unique solution $\mathbf{P}_t$ on the manifold and can be expressed in closed form. Hence, the solution to subproblem (8) can be computed using a variety of minimax methods, and we found that the Goal Attainment Method [[G75](https://ieeexplore.ieee.org/document/1101105), [FP86](https://www.infona.pl/resource/bwmeta1.element.ieee-art-000004256366), [F86](https://www.sciencedirect.com/science/article/abs/pii/B9780080316659500122)] gives a more stable and accurate solution for our subproblem (8). Then, we provide Lemmas 9-10 (appendix) to show that $\mathbf{P}_t$ is indeed a descent direction and to provide an estimate of the decrease of the function $\mathbf{F}$ along the solution of (8). These results correspond to the descent of the objective in the single-objective case and are well-studied in the optimization literature for gradient descent-type methods.
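As an illustration of the loop this describes — restricted to the Euclidean two-objective case (the paper works on the generalized Stiefel manifold with retractions) — the direction $\mathbf{P}_t$ can be taken as the negated min-norm combination of the two gradients, and $\|\mathbf{P}_t\|$ vanishes as a Pareto critical point is approached. The function names and setup below are ours, not the paper's:

```python
import numpy as np

def mo_steepest_descent(grads_fn, x0, eta=0.1, iters=500):
    """Two-objective steepest descent: P_t is the negated min-norm point of the
    gradients (closed form for two objectives); ||P_t|| -> 0 near the Pareto set,
    mirroring the convergence argument sketched in the response."""
    x = np.asarray(x0, float)
    pnorm = np.inf
    for _ in range(iters):
        g1, g2 = grads_fn(x)
        diff = g1 - g2
        denom = diff @ diff
        gamma = 0.5 if denom == 0.0 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
        P = -(gamma * g1 + (1.0 - gamma) * g2)   # common descent direction
        pnorm = np.linalg.norm(P)
        x = x + eta * P
    return x, pnorm
```

On the toy pair $f_1(x)=\|x-a\|^2/2$, $f_2(x)=\|x-b\|^2/2$, the iterate lands on the Pareto set (the segment between $a$ and $b$) and the direction norm vanishes.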
> **W2:** For Theorems 3 and 4, the proof sketches (or technical contributions if any) are preferred to be included in the main body.
**Response:** Thank you for the invaluable suggestion. Fortunately, NeurIPS permits an extra content page for the camera-ready version, which allows us to incorporate the following proof sketch:
We provide the proof of Theorem 3 in three steps:
* **Step I**: We use Lemma 7 to show that the optimization subproblem (8) has a unique solution $\mathbf{P}_t $.
* **Step II**: We provide Lemmas 9-10 to show an estimate for the decrease of the function $\mathbf{F}$ along the solution of (8); that is for any $\eta_t \geq 0$, we have
$
\mathbf{F}(\eta_t\mathbf{P}_{t}) \preceq \mathbf{F}(\mathbf{U}_t, \mathbf{V}_t) - ( \eta_t-L_F \eta_t^2/2) \left \Vert \mathbf{P}_t \right \Vert^2 \mathbf{1}_M.
$
This can be seen as an extension from single-objective manifold optimization [[Nicolas23](https://www.nicolasboumal.net/book/IntroOptimManifolds_Boumal_2023.pdf)] to the multi-objective counterpart.
* **Step III**: Finally, summing both sides of the inequality in Step II for $t=0, 1, \ldots, T-1$, and using our step size condition, gives the desired result.
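To spell out Step III for Theorem 3 (our reconstruction, assuming a constant step size $\eta_t \equiv \eta < 2/L_F$ and writing $(\mathbf{U}_{t+1}, \mathbf{V}_{t+1})$ for the retracted point $\eta_t\mathbf{P}_t$): each component $i$ of the vector inequality in Step II telescopes,

```latex
\sum_{t=0}^{T-1}\Big(\eta - \frac{L_F\eta^2}{2}\Big)\|\mathbf{P}_t\|^2
\;\le\; \sum_{t=0}^{T-1}\big[f_i(\mathbf{U}_t,\mathbf{V}_t)-f_i(\mathbf{U}_{t+1},\mathbf{V}_{t+1})\big]
\;=\; f_i(\mathbf{U}_0,\mathbf{V}_0)-f_i(\mathbf{U}_T,\mathbf{V}_T),
\qquad\text{so}\qquad
\min_{0\le t<T}\|\mathbf{P}_t\|^2
\;\le\; \frac{f_i(\mathbf{U}_0,\mathbf{V}_0)-\inf f_i}{T\,\big(\eta - L_F\eta^2/2\big)} = O(1/T).
```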
The proof of Theorem 4 follows steps similar to Theorem 3:
* **Step I**: We use a single objective variant of Lemma 7 to establish the unique solution $\mathbf{G}_t$ for the optimization subproblem (10).
* **Step II**: A single objective variant of Lemmas 9-10 shows that for any $\eta_t \geq 0$, we have a descent inequality as follows:
$
f(\eta_t\mathbf{G}_{t}) \leq f(\mathbf{U}_t, \mathbf{V}_t) - ( \eta_t-L_f \eta_t^2/2) \left \Vert \mathbf{G}_t \right \Vert^2.
$
* **Step III**: Finally, by summing both sides of the inequality in Step II for $t=0, 1, \ldots, T-1$, and using our step size condition, we obtain the desired result. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments and suggestions. We summarize all the most-concerned questions raised by the reviewers below and present **new experiments**, accompanied by detailed explanations for the corresponding **attached Figures 1-3**.
* Reviewer **kQSh** has highlighted the need for more detailed explanations of the efficient algorithms utilized to solve subproblems (8) and (10).
* Reviewer **MdZ1** has emphasized the importance of addressing multiple modalities for CCA, suggesting an extension of our fair CCA approach to handle such scenarios.
* Reviewer **1Xpv** has expressed concerns about the generalizability of our framework and its applicability to various machine learning techniques or domains.
* Reviewer **6yqF** has observed that our approaches are more time-consuming compared to standard CCA, particularly in optimizing multi-objective problems.
* Reviewers **1Xpv** and **6yqF** have raised concerns regarding the trade-off between correlation disparity error (fairness) and correlation (accuracy).
To address these concerns, we have taken the following actions:
**a1:** We have provided more comprehensive descriptions and explanations of efficient algorithms utilized to solve subproblems (8) and (10), streamlining them for greater clarity and self-consistency within the main body of the paper. Although these descriptions were previously covered in Appendices A.2 and A.3, we aim to enhance their integration into the main text.
Further details are provided in the response to Reviewer kQSh.
**a2:** We have given discussions regarding the extension of our fair CCA framework to accommodate multiple modalities. This includes exploring the application of our approach to scenarios such as multiset CCA, while acknowledging the challenges posed by optimization, scaling, and numerical stability issues.
Further details are provided in the response to Reviewer MdZ1.
**a3:** We have elaborated on the generalizability of our optimization framework to diverse machine-learning techniques and domains. This includes its potential extension to more generalized CCA variations including kernel CCA, deep CCA, and multiset CCA as suggested by Reviewer MdZ1. We also explore its application in subsequent CCA-based tasks like clustering and classification. Our approach of enhancing fairness by encouraging global projection matrices to align centrally with local projection matrices on the smooth Manifold has broader applicability and can be extended effectively to enhance other methods such as fair PCA.
Further details are provided in the response to Reviewer 1Xpv.
**a4:** We have conducted additional experiments to assess the time complexity of our approach, focusing on its sensitivity to the number of samples ($n$) and features ($d$).
- The findings, illustrated in Figure 1 of the attached PDF, highlight that MF-CCA’s runtime is notably sensitive to these factors, contrasting with the relative stability of CCA and SF-CCA runtimes. This sensitivity stems from MF-CCA's multi-objective nature, which necessitates optimizing $2K+2$ matrix variables for $M$ component objectives. Yet, this complexity offers the advantage of minimizing hyperparameter ($\lambda$) tuning while still achieving strong outcomes.
- We would like to emphasize that a runtime comparison of our methods against baseline CCA is indeed included in Table 2 of the paper. Additionally, the runtime sensitivity concerning the number of subgroups ($K$) is discussed in Figure 11 of Appendix B.3.
In response to the valuable suggestion by Reviewer 6yqF, we have incorporated new figures in the attached file to enhance the comprehensiveness of our runtime-related analysis.
Further details are provided in the response to Reviewer 6yqF.
**a5:** We have conducted new experiments, expanded discussions, and provided insights into the trade-off between fairness and accuracy. We show that our methods maintain accuracy while enhancing fairness, as evidenced by percentage changes.
- In Figure 12 of the appendix, we demonstrate how performance reacts to the trade-off parameter $\lambda$ using synthetic data. Increasing $\lambda$ improves fairness but may slightly reduce accuracy, a reasonable outcome given its role in fairness regularization. Notably, the accuracy decline is modest despite significant fairness gains, showcasing our methods' capability to balance accuracy and fairness for SF-CCA.
- Extending our experiments to real datasets, we present analogous results in Figure 2 of the attached PDF. Additionally, Figure 3 of the attached PDF depicts the performance changes across various datasets, underscoring our methods' ability to enhance fairness without sacrificing accuracy to a significant degree.
Therefore, the combined analysis of attached Figures 2-3 along with Figure 12 in the appendix offers a comprehensive understanding of the trade-off between fairness and accuracy.
Further details are provided in the responses to Reviewer 1Xpv and 6yqF.
**Remark:** A few cited references in the response are absent from our paper's reference list. We assure their inclusion in the final version.
We appreciate the chance to address any concerns and questions raised by the reviewers during the discussion period.
Pdf: /pdf/f506205b746f1e5125638d73b1659c2fa36213f4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
PolyDiffuse: Polygonal Shape Reconstruction via Guided Set Diffusion Models | Accept (poster) | Summary: The paper presents a diffusion-based approach for reconstructing polygonal shapes from floorplan images and autonomous-driving sensor data. The idea is to refine an initial rough reconstruction through iterative denoising steps which revert a forward process that diffuses regular reconstructions into random noise through per-element noise injection (though the notions of elements are not clearly given for different tasks). The challenge with learning such a denoising diffusion model, as identified by the paper, is the robustness to permutations of orders of elements that are introduced by linearization (though it's not convincing why a set-based transformer model would introduce such an ordering problem inevitably). The paper proposes to address this challenge by learning a guided forward diffusion process, such that the diffusion paths for different permutations are encouraged to be separate. At test time, the rough initial reconstruction is first converted to corresponding target noise distributions by the learned guided diffusion networks, and then reverted back to an accurate reconstruction through iterative denoising.
The approach is tested on two polygonal shape reconstruction tasks, including floorplans and autonomous driving maps, and shows improvements over the baselines that produce initial reconstructions. Ablation studies are conducted on diffusion guidance and choices of several hyper parameters.
Strengths: The paper proposes to improve polygonal reconstruction works that produce results in a single network pass by iterative denoising of the reconstruction. It is thus expected that the results can have better quality than the initial estimates, which is confirmed by tests on two tasks.
The paper formulates the diffusion model for structured elements like polygons and polylines, and identifies the problem of ordering permutation in representing the structures, which may cause severe ambiguity for a network to learn the reverse process.
The paper designs a learned guided forward diffusion process to distinguish the different permutations causing representational ambiguity, and derives the updated diffusion models biased by the learned diffusions. Such adaptive and learned diffusion can be inspiring to other situations where hand-crafting a diffusion process can be problematic.
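For readers less familiar with the formulation under review: the hand-crafted forward process that the paper's guided variant replaces injects independent Gaussian noise per element. The standard (unguided) DDPM closed-form marginal is sketched below with illustrative names — the paper's guided process additionally learns how this noise is shaped per element set:

```python
import numpy as np

def ddpm_forward(x0, t, alpha_bar, rng):
    """Standard DDPM forward marginal q(x_t | x_0):
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I).
    Here x0 is an (N, V, 2) array: N polygonal elements with V 2-D vertices each."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

At `alpha_bar[t] = 1` the sample is the clean data; as `alpha_bar[t] -> 0` every element becomes pure noise, which is where the ordering ambiguity between permuted element lists arises.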
Weaknesses: The computational cost of the iterative process can be better illuminated, to help readers appreciate the comparative advantage obtained over the initial estimates produced by other methods.
The paper identifies another challenge with diffusion-based approach for reconstruction: a reconstruction task has a single solution, so the initial noise needs to be chosen carefully. I do not fully understand this statement, and cannot find any validation of this challenge in the experiments and discussions. On the other hand, the proposed guided diffusion process does not produce a single noise for an input polygonal shape either, but samples from the predicted Gaussian distribution.
The guided diffusion process is trained through contrastive learning on the initial reconstructions of training data by a different method. It is not clear how well the guided diffusion generalizes for data samples outside the training set. In contrast, a fixed diffusion procedure as adopted by DDPM and others does not have this generalization issue, as the final noise distributions are known a priori. Indeed, the paper acknowledges that when the initial reconstructions come from hand annotations that are largely different from training reconstructions for HD Maps, the guided diffusion and denoiser do not work robustly.
It might be difficult to regard this work as the first one using diffusion models for reconstruction. In fact, diffusion models have been widely used for conditioned generation tasks that resemble reconstruction, e.g. ControlNet. Also, people have explored using diffusion models for segmentation [1], which is a kind of reconstruction.
[1] SegDiff: Image Segmentation with Diffusion Probabilistic Models.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Computational costs of the iterative denoisers should be reported, and compared with the base cost of running the initial reconstructions.
2. Please provide clarifications and experimental evidence that perturbations of the optimal noise for a particular reconstruction can cause failure of denoise-based reconstruction.
3. In the footnote of page 3, it is stated that even a permutation-invariant model like the Transformer has to pick a permutation as the representation of a data sample. This statement seems self-contradictory and needs clarification. For example, in DETR-like works (including RoomFormer and MapTR) there is no ordering of the elements, and they rely on matching to compute the loss functions between two sets. Why can this approach not be extended to the diffusion setting?
4. When defining the permutation contrastive loss (12), why not compare x_t and x'_t, but compare x_0 and x_t instead? Is it essential to constrain the similarity between x_0 (the uncorrupted data) and x_t (the noised version)? This is not common for standard diffusion models.
5. Sec.4, feature representation, it's not clear how polylines/polygons of different vertex numbers are encoded and represented. This is an important detail as its encoding directly impacts how likelihood-based refinement (Fig4) can be done.
6. The line matching metrics augmented with angle threshold is not defined clearly. There is no formal description of its computation in either text or supplemental.
7. Define the standard DM model of Sec.5.3 clearly. Otherwise it's hard to justify any proposed component.
8. What is the training schedule? It looks like the number of training epochs, but why bother with the training iterations so much?
9. Since it is acknowledged that the guided diffusion network may not generalize to hand-annotated input, I hope the authors can thoroughly discuss the limitations of the learned diffusion network, including how to decide if it's not general enough for a particular test dataset or input pattern, and what are the practical ways to enhance its generalization scope.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable questions and thoughtful feedback. We answer your questions and comments as follows.
(A clarification: "elements" are the polygons/polylines in our tasks)
---
**W1&Q1. Computational costs of the iterative denoisers.**
Due to the space limit, we refer to *Q3 of Reviewer t57P* and *GR1 of the global response* for discussions and tables.
---
**W2&Q2. Clarifications and experimental evidence that perturbations of the optimal noise for a particular reconstruction can cause failure of denoise-based reconstruction.**
Table 3 of the main paper has shown quantitatively that GS-DM is better than a standard DM model that uses initial noise drawn from a standard Gaussian (i.e., a non-optimal noise distribution compared to the per-element Gaussian distribution estimated by our guidance network). We also included an illustrative toy experiment in *GR3 of the global response*, showing that the standard DM model easily fails with bad initial noises.
---
**Q3. The footnote on page-3, and why cannot DETR's approach be extended to the diffusion setting?**
We thank you for the great question. Here is the clarification, which will also be added to the paper: The use of a matching loss (e.g., the Hungarian matching loss in DETR) could indeed make DMs permutation-invariant. However, we cannot pick an arbitrary loss in the formulation. The L2 denoising loss of DMs is strictly derived from either the variational lower bound (DDPM) or denoising score matching (NCSN) and is order-aware by definition (i.e., based on one specific serialization of the data). Specifically, in DDPM's derivations, the loss consists of a sequence of KL divergences between the forward and the reverse Gaussians, which cannot be easily extended to a permutation-invariant version -- we are unaware of any DM formulation that is inherently permutation-invariant.
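For reference, the standard DDPM variational bound referred to above decomposes into a sequence of Gaussian KL terms, each defined on one specific serialization of the data (standard DDPM notation, not taken from the paper):

```latex
L_{\mathrm{vlb}} = \mathbb{E}_q\Big[
  \underbrace{D_{\mathrm{KL}}\big(q(\mathbf{x}_T \mid \mathbf{x}_0)\,\|\,p(\mathbf{x}_T)\big)}_{L_T}
  + \sum_{t>1} \underbrace{D_{\mathrm{KL}}\big(q(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{x}_0)\,\|\,p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)\big)}_{L_{t-1}}
  - \underbrace{\log p_\theta(\mathbf{x}_0 \mid \mathbf{x}_1)}_{L_0}
\Big]
```

Since every $L_{t-1}$ compares Gaussians placed on fixed vertex coordinates, a set-level matching step does not slot into this derivation the way it does into DETR's regression loss.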
---
**Q4. Eq.12: why not compare the x_t and x'_t, but compare x_0 and x_t instead?**
The x_t and x'_t are permutation variants of the same noisy sample. They should not interfere with (i.e., get close to) each other, which would make denoising ambiguous. We provide more explanations here and will clarify in the paper.
The core motivation of GS-DM is "The guidance network learns to guide the noise injection in a per-element manner, such that a sample x_0 remains separated from its permutation variants throughout the diffusion process". The second triplet loss term in Eq.12 encourages x_t to be closer to x_0 than any other permutation-equivalent variants x'_t. x_0 is the *anchor* using the terminology of triplet loss, and x_t and x'_t are the *positive* and *negative*, respectively. In triplet loss, the positive (x_t) and negative (x'_t) are not directly compared to each other, but compared against the anchor (x_0) respectively. Please see Sec.3.3 of the supp for full details of the permutation loss.
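The anchor/positive/negative structure described above can be sketched with a generic triplet loss. This is a minimal NumPy illustration of the triplet mechanics only, not the authors' Eq. 12 (the margin value and embedding shapes are assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet loss: pull the positive toward the anchor, and push
    the negative at least `margin` farther from the anchor than the positive."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

rng = np.random.default_rng(0)
# x0: clean sample (anchor); xt: its noised version (positive);
# xt_perm: a noised permutation variant of the same sample (negative)
x0 = rng.normal(size=(8, 16))
xt = x0 + 0.1 * rng.normal(size=(8, 16))
xt_perm = rng.normal(size=(8, 16))
loss = triplet_loss(x0, xt, xt_perm)
```

Note that, exactly as the rebuttal explains, `xt` and `xt_perm` never enter a distance term together; both are measured only against the anchor `x0`.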
---
**Q5. Sec.4, feature representation.**
We thank you for the question and will clarify in the paper. A data sample is represented by a $N\times M\times 2$ tensor. $N$ is the number of polygonal instances and $M=\max_{i=1}^N N_i$ is the maximum number of vertices in an instance. The last dimension holds each vertex's (x,y) coordinates. Instances with fewer than $M$ vertices are padded, and a mask is generated to handle the padding in the model. The order of instances is based on an arbitrary permutation. In the model, the vertex coordinates are transformed into sinusoidal positional encodings and augmented with two additional position encodings for the vertex index and instance index. All these positional encodings are concatenated and processed by an MLP to produce the feature with shape $(N*M)\times D$, where $D$ is the hidden dimension of the denoising network (we use the same hyperparameter as the base method).
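The padding-plus-mask packing described above can be sketched as follows. This is a hedged illustration of the tensor layout only (function name and dtype choices are ours, not the paper's):

```python
import numpy as np

def pack_instances(instances, coord_dim=2):
    """Pad a list of variable-length polygons/polylines into an
    (N, M, coord_dim) tensor plus a boolean mask (True = real vertex)."""
    n = len(instances)
    m = max(len(inst) for inst in instances)  # M = max vertices per instance
    x = np.zeros((n, m, coord_dim), dtype=np.float32)
    mask = np.zeros((n, m), dtype=bool)
    for i, inst in enumerate(instances):
        x[i, : len(inst)] = inst
        mask[i, : len(inst)] = True
    return x, mask

# two polygonal instances with 3 and 5 vertices
polys = [np.array([[0, 0], [1, 0], [0, 1]], dtype=np.float32),
         np.ones((5, 2), dtype=np.float32)]
x, mask = pack_instances(polys)   # x.shape == (2, 5, 2)
```

In the full model, each valid (instance index, vertex index, coordinate) triple would then be mapped to positional encodings and an MLP, giving the $(N*M)\times D$ feature described above.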
---
**Q6. Clear descriptions of the line matching metrics augmented with angle threshold.**
The full details of the angle-aware matching criterion are in *Sec.4.2 of the supplementary document*. The original average precision (AP) metric considers a predicted instance a true positive once the Chamfer-distance criterion is met. Our augmented AP considers a predicted instance a true positive only when both the Chamfer-distance and angle-distance criteria are met. We will add more explanations.
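The "both criteria must be met" rule can be sketched as below. This is a simplified stand-in for the metric in Sec.4.2 of the supplementary (thresholds, the exact angle definition, and equal vertex counts are all our assumptions, not the paper's specification):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two vertex sets of shape (V, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mean_angle_error(a, b):
    """Mean absolute difference of consecutive edge directions, in degrees.
    Assumes both polylines have the same number of vertices."""
    def edge_angles(p):
        e = np.diff(p, axis=0)
        return np.degrees(np.arctan2(e[:, 1], e[:, 0]))
    diff = np.abs(edge_angles(a) - edge_angles(b)) % 360.0
    return np.minimum(diff, 360.0 - diff).mean()

def is_true_positive(pred, gt, cd_thresh=0.5, angle_thresh=10.0):
    # a prediction counts only when BOTH criteria are satisfied
    return (chamfer_distance(pred, gt) <= cd_thresh
            and mean_angle_error(pred, gt) <= angle_thresh)

gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred_good = gt + 0.01  # small shift: passes both criteria
pred_bad = np.array([[0.0, 0.0], [1.0, 0.3], [2.0, 0.0]])  # kinked: passes Chamfer, fails angle
```

The kinked example shows why the augmentation matters: a prediction can sit close in Chamfer distance while still having clearly wrong edge directions.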
---
**Q7. Define the standard DM model of Sec.5.3 clearly.**
We thank you for the question. The following is the definition, and we will clarify in the paper. The standard DM model is the vanilla DDPM with images as the condition. There is *no guidance network* and *no proposal generator*. The reverse process starts with a noise sampled from the standard Gaussian distribution and gradually denoises it into the final reconstruction using the same sampler as GS-DM. It also uses the same denoising network architecture as GS-DM. This model helps to ablate the key designs of our GS-DM.
---
**Q8. What is the training schedule, and why bother with it so much.**
"Training schedule" means the number of training iterations and the corresponding learning-rate decay strategy. We mentioned this because diffusion models need more iterations to converge well than their one-shot counterparts (i.e., RoomFormer and MapTR). To make our comparisons as fair as possible, we increase the training iterations of the base methods to match ours.
---
**W3&Q9. Discussions about the limitations of the learned diffusion network, including how to decide if it's not general enough for a particular test dataset or input pattern, and what are the practical ways to enhance its generalization scope.**
Due to the space limit, we refer to our answer in *Q2 of Reviewer pHyL* for thorough discussions and analyses of failure modes. Please also see extended discussions with concrete examples on the HD mapping task in *GR2 of the global response*.
---
**W4. The first one using diffusion models for reconstruction.**
We thank you for the comment. We will revise the claim and limit the scope to geometry reconstruction.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I hope the authors can incorporate the updates and discussions to the final version. | Summary: This paper proposes a Guided Set Diffusion Model for reconstruction, addressing the challenges of ambiguous denoising and selecting appropriate initial noise. By learning guidance networks, the model ensures distinct representations for samples with multiple permutations in structured geometry. During testing, the model uses the guidance networks to initialize Gaussian noise and denoises to reconstruct polygonal shapes conditioned on sensor data. The approach is evaluated on floorplans and HD maps, demonstrating significant advancements over existing methods and enabling practical applications.
Strengths: The paper takes a step towards formulating reconstruction as a generation process conditioned on sensor data, providing insights into advancing Diffusion Models for shape reconstruction. The approach of guiding noise injection per element to ensure separation from permutation variants during the diffusion process effectively addresses the ambiguous denoising issue. The PolyDiffuse method is well-presented and achieves promising results for layout reconstruction. Overall, this paper makes valuable contributions and is commendable.
Weaknesses: While floorplan and HD map reconstruction are important problems, it appears that the GS-DM formulation is more suitable for layout or other semantic reconstruction tasks rather than real 3D shape reconstruction. It would be beneficial if the authors could provide a discussion on this aspect and give some suggestions for addressing the limitations of PolyDiffuse when extending it to 3D shape reconstruction tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable comments and appreciate the overall positive feedback.
The question of extending PolyDiffuse to 3D reconstruction is quite open-ended and worth a detailed discussion. We provide our thoughts in the following, and the answer is organized into two parts: 1) describe the settings of 3D shape reconstruction tasks where extensions of PolyDiffuse are possible; 2) explain the key challenges of applying PolyDiffuse to these tasks in the current stage. We will also add discussions for future works to the paper.
---
**1. Task settings:** Since PolyDiffuse reconstructs the data as a set of polygonal structures, its potential extensions to 3D shape reconstruction will be reconstructing 3D CAD models or compact low-poly meshes. We briefly describe two related lines of previous works for indoor scenes and objects, respectively:
(1). *Indoor CAD reconstruction:* This line of work aims to reconstruct CAD-quality models from RGB-D captures of a large indoor scene, facilitating valuable applications in the construction industry. Typical works include Structured Indoor Modeling[R1] and some of its follow-ups[R2, R3]. The entire scene usually contains many rooms, and the rooms consist of a set of planar polygonal structures for walls, floors, and ceilings.
(2). *Object shape reconstruction:* Another line of work studies generative models of objects (usually from a single category) as compact low-poly meshes. As these generative models can naturally be paired with encoders (for point clouds or images) to support conditional generation, they are also able to reconstruct object shapes from sensor inputs. To our knowledge, typical works include PolyGen[R4] and BSP-Net[R5]. The compact meshes can be regarded as a set of variable-length polygons, while the polygons efficiently share the vertices.
Given the similarities in the data representation, the above two lines of work are potential directions to extend PolyDiffuse.
---
**2. Key challenges:** (1). the lack of large datasets is an immediate challenge, as paired data with sensor inputs and CAD-quality annotations is very rare, even considering synthetic ones. PolyGen[R4] already observed overfitting issues when training on pre-processed ShapeNet data. Previous works on indoor CAD reconstruction [R1-R3] rely on heuristics to produce the final compact CAD-quality models, and there are no end-to-end deep learning methods due to the lack of training data; (2). While PolyDiffuse is good at reconstructing the geometry of the polygonal shapes (i.e., the accurate coordinates of vertices), it needs a reasonable proposal generator to provide the "meta-information" -- the number of polygons and the number of vertices per polygon. While getting reliable meta-information is not hard in floorplan and HD map reconstructions, it becomes more challenging for large-scale indoor scenes or complicated objects (e.g., there might be curved surfaces).
---
In summary, extending GS-DM and PolyDiffuse to more challenging 3D reconstruction tasks is an interesting yet challenging future direction. We need large datasets as well as some reliable approaches to estimate the meta-information.
---
### References
[R1]. Ikehata, Satoshi, Hang Yang, and Yasutaka Furukawa. "Structured indoor modeling." In Proceedings of the IEEE international conference on computer vision, 2015.
[R2]. Macher, Hélène, Tania Landes, and Pierre Grussenmeyer. "From point clouds to building information models: 3D semi-automatic reconstruction of indoors of existing buildings." Applied Sciences 7, no. 10 (2017): 1030.
[R3]. Tang, Shengjun, Xiaoming Li, Xianwei Zheng, Bo Wu, Weixi Wang, and Yunjie Zhang. "BIM generation from 3D point clouds by combining 3D deep learning and improved morphological approach." Automation in Construction 141 (2022): 104422.
[R4]. Nash, Charlie, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. "Polygen: An autoregressive generative model of 3d meshes." In International conference on machine learning, pp. 7220-7229. PMLR, 2020.
[R5]. Chen, Zhiqin, Andrea Tagliasacchi, and Hao Zhang. "Bsp-net: Generating compact meshes via binary space partitioning." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 45-54. 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. | Summary: This paper proposes a novel method for reconstructing multiple polygon shapes using a conditioned diffusion model. The method first learns a score-matching-based prior diffusion model from data. This model is then used to denoise the sensor data, resulting in the reconstruction of the polygon shapes. To handle the permutation order of the polygonal elements, the method separates the target Gaussian distribution of a sample from its permutation variants.
Strengths: 1. A nice solution to handle the permutation order in the reconstruction of multiple geometric elements in conditioned diffusion model.
2. Comprehensive experiments to evaluate the performance of the proposed method.
3. This paper is well-written and easy to follow. Figure 2 provides a clear and concise illustration of the basic idea.
Weaknesses: 1. The maximum number of polygons and their vertices used in the training of the diffusion model should be clarified. Are these numbers consistent with the ones listed in lines 183-184?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I like the idea of integrating a diffusion model as a prior into the reconstruction pipeline. However, I wonder if it is possible to handle the permutation order in a different way. For example, we could first train a network to predict the structure of the floorplan, such as the number of rooms and their basic bounding boxes. Then, we could denoise each shape individually. This approach might be easier to train and faster in inference.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations of the proposed method in Sec. 7. I also want to know whether the denoising speed is fast enough for a real-time HD map reconstruction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the constructive questions and appreciate the overall positive comments on our presentation, idea, and experiments. We answer your questions/concerns as follows.
---
**Q1. The maximum number of polygons and their vertices used in the training of the diffusion model should be clarified.**
Thank you for the catch. We provide the details and will also clarify them in the paper. For the floorplan reconstruction task, the maximum number of polygons and maximum vertices per polygon used during training are 20 and 40, respectively. These choices are inherited from the base method RoomFormer and are actually larger than the maximum possible numbers of the Structured3D dataset (reported in L183-184). For the HD map reconstruction task, the maximum number of polygons/polylines and maximum vertices per instance used during training are 30 and 20, respectively. Note that we employ the same map representation as the base method MapTR -- all the map instances have 20 uniformly interpolated vertices.
At test time, PolyDiffuse takes whatever the proposal generator produces and does not limit the maximum number of polygons and vertices per polygon.
---
**Q2. If it is possible to handle the permutation order in a different way. For example, we could first train a network to predict the structure of the floorplan, such as the number of rooms and their basic bounding boxes. Then, we could denoise each shape individually. This approach might be easier to train and faster in inference.**
This is an interesting idea -- first training a simple network to predict the "meta-information" of the structures (i.e., number of instances, bounding box of each instance, and the rough number of vertices), and then employing a diffusion model to denoise the meta-information of each instance individually (this should be in-parallel denoising for acceleration). Since the shapes are denoised separately, the set ambiguity issue does not arise. However, we have two concerns below, which might potentially limit the performance:
(1). Some pre-processing steps might be necessary for employing a diffusion model to denoise each polygonal instance separately. For example, we might need to normalize the coordinate space and crop image features based on the bounding box of each instance, such that the denoised polygons are bounded by the boxes and do not interfere with each other. The ground-truth bounding boxes can be used at training time. But at test time, recovering from an inaccurate initial bounding box (e.g., a too-small box) could be hard.
(2). The inter-instance relation is a crucial type of pattern in structured polygonal data. For example, neighboring polygons effectively share corners and edges (as in floorplan), and polylines are usually parallel, orthogonal, or connected to each other based on certain regularities (as in HD maps). The individual design makes it impossible for the denoising network to model the inter-instance interactions, which could largely impair the performance. To alleviate this issue, we might need additional clever designs to allow message passing between the instances during the in-parallel denoising. On the contrary, our PolyDiffuse can directly borrow the network architecture of state-of-the-art one-shot methods (e.g., RoomFormer, MapTR) to implement the denoising network.
(3). Using bounding boxes as the intermediate representation is a bit task-specific and might not support arbitrary initial inputs (e.g., human scribbles as rough annotations). On the contrary, the guidance network in our GS-DM formulation is very general: 1) the training simply relies on a distance-based loss without specifying a typical type of input; 2) it can take different initial results at test time (i.e., either the results from existing methods or arbitrary human inputs).
Overall, we believe this idea is feasible and can be faster than the current formulation of PolyDiffuse w/ GS-DM. But given the above concerns, more complicated designs will be needed to achieve satisfactory performance, and the method will be somewhat task-specific.
---
**Q3. Whether the denoising speed is fast enough for a real-time HD map reconstruction.**
We thank you for the question and have measured the running time of PolyDiffuse based on MapTR-tiny (using ResNet-50 image backbone as in Table.2 of the main paper). We provide the details in this rebuttal and will include related information in the paper.
The running time is measured with *a single Nvidia RTX A5000 GPU* on our machine. The time used by the image encoding (i.e., processing six perspective images with the ResNet and aggregating all features into the BEV space by a Transformer) is **roughly four times** the time used by the Transformer decoder to produce the final denoising outputs. Since all the denoising steps share the same BEV features, the image encoding will only run once at test time.
**FPS stats:** MapTR-tiny has 14.3 FPS in our computation environment. With the same computing resources, a 5-step PolyDiffuse has 7.2 FPS, and a 10-step PolyDiffuse has 4.4 FPS. If we count both the running time of the MapTR proposal generator and PolyDiffuse, the results are 4.8 FPS for 5-step and 3.4 FPS for 10-step.
We also provide an updated table in *GR1 of the global response* to better demonstrate the "running time vs. performance" relation for both tasks. Please consider taking a look there.
---
Rebuttal Comment 1.1:
Title: Thx
Comment: Thanks for clarifying my questions. I have no further concerns. | Summary: - The paper introduces PolyDiffuse, a novel structured reconstruction algorithm that incorporates Guided Set Diffusion Models (GS-DM).
- The core concept involves splitting the reconstruction pipeline into two distinct stages.
- The forward diffusion process focuses on learning guidance networks to address denoising ambiguity effectively.
- The guidance networks establish individual Gaussians as target distributions for each polygon and are learned prior to the denoising training phase.
- The reverse denoising process utilizes the sensor data as a condition to reconstruct polygonal shapes.
- The authors conduct extensive experiments involving two polygonal structure reconstruction tasks.
- Through comprehensive quantitative and qualitative evaluations, the results demonstrate that GS-DM outperforms existing state-of-the-art methods in terms of performance and quality of the reconstructions.
Strengths: - The paper is well-written and easy to follow.
- The derivation of the GS-DM is appropriately detailed.
- By employing a multi-stage approach and introducing guidance, the proposed method effectively resolves ambiguity in polygonal shape scenarios.
- The proposed method can evaluate reconstruction quality through likelihood evaluation.
- The proposed method performs well even when using only rough annotations as input.
Weaknesses: The GS-DM may be sensitive to the specific guidance, as mentioned in Section 7 (Limitations).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - It is worth exploring whether incorporating the condition $y$ as an input to the guidance network leads to performance improvements.
- What are the failure cases when applying PolyDiffuse to off-the-shelf methods? Is there further analysis regarding these cases?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the time and effort in providing insightful feedback and appreciate the overall positive comments. We answer the questions and concerns as follows.
---
**Q1. It is worth exploring whether incorporating the condition as an input to the guidance network leads to performance improvements.**
Yes, this is a potential strategy to improve the quality of the guidance. However, we chose not to do this in the current formulation due to three considerations:
(1). The proposal generator (i.e., either an existing method like RoomFormer or MapTR, or a human annotator to provide rough annotations) already takes $\mathbf{y}$ as the input to produce the initial proposal $\hat{\mathbf{x}}_0$, so having both $\hat{\mathbf{x}}_0$ and $\mathbf{y}$ as inputs to the guidance network might not be necessary. Furthermore, the outputs of the guidance network are combined with Gaussian noise to produce noisy data during denoising training and are adjusted by the denoising network to get the final reconstruction during sampling, so they do not have to be precise.
(2). Incorporating the sensor input $\mathbf{y}$ could bring significant computation costs since it requires image encoding and cross-attention layers (for extracting image features). Our current guidance network is simply a lightweight Transformer with two self-attention layers;
(3). The guidance training loss is designed to preserve the permutation of the input data in the forward process, and there is no regression or classification-style supervision as in single-shot methods (e.g., RoomFormer and MapTR). With this loss design, the guidance network might not effectively use the information of $\mathbf{y}$.
---
**Q2. What are the failure cases when applying PolyDiffuse to off-the-shelf methods? Is there further analysis regarding these cases?**
Three types of failure modes are observed in our experiments, and they are mentioned and roughly explained in the main paper (Sections 5.1, 5.2, and 7). We will provide more detailed discussions in the following and will also add them to the paper. Please also see extended discussions in *GR2 of the global response*, where we provide a figure to show concrete examples of the failure cases on the HD mapping task.
(1). **Wrong number of vertices.** This type of failure happens when the proposal generator predicts wrong vertex numbers for the polygons/polylines, which has been shown in Fig.4 of the main paper. We have provided a feasible solution to address this failure mode by leveraging the likelihood evaluation property of our DM-based method, where we can locally search different vertex numbers for the polygons and get the one with the highest likelihood as the final reconstruction result.
(2). **Wrong number of polygonal instances (i.e., elements of the set).** As mentioned in the Limitations in Sec.7, this type of failure case happens when the proposal generator misses entire polygons/polylines, or predicts redundant elements (e.g., the ground truth is a large polygon, but the proposal generator predicts two separate small polygons).
A potential solution for the former case is still based on the likelihood evaluation property: i) Run the proposal generator with a lower confidence threshold to get an initial result with more instances, which has a better recall but lower precision; ii) Extract subsets of the initial result as different initial proposals for PolyDiffuse, run PolyDiffuse and evaluate the likelihood of the final reconstruction; iii) Pick the one with the highest likelihood as the final result. This strategy could potentially recover missing instances but needs much more running time.
The latter case is more challenging, and PolyDiffuse cannot rectify these redundant elements in the initial proposals. We hope there can be future works to handle the number of instances in an elegant way.
(3). **Inaccurate shape/location caused by limited generalization ability of the networks to unseen styles of initial proposals.** This type of failure case is also mentioned in the Limitations subsection of Sec.7. Concretely, in our current training settings, the guidance network is trained only with "ground-truth proposals" during the guidance training stage (Algorithm1 in the paper), and it also only takes the "ground-truth proposals" to produce the guidance for training the denoising network during the denoising training stage (Algorithm2 in the paper). As a result, the networks might not perfectly generalize to other styles of the initial proposals, like the circle-shaped rough annotations used in our experiments, which can lead to *inaccurate location or shape* of the final reconstruction.
One potential solution to alleviate this generalization issue is to train the guidance network and denoising network with different types of initial proposals at training time by augmenting the ground-truth data. Possible augmentation strategies include: adding noise to the vertex coordinates, or converting the G.T. elements into some canonical shapes (e.g., circles as in our experiments) to mimic the style of rough annotations, etc.
Another potential solution is to train separate guidance and denoising networks for each type of initial proposal and apply the corresponding models at test time according to the exact type of initial reconstruction. But this solution induces more computational cost for training.
(Note: we use "initial proposal" and "initial reconstruction" interchangeably in the above descriptions)
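The likelihood-based local search mentioned for failure mode (1) can be sketched as a generic skeleton. The `make_candidate` and `log_likelihood` hooks are hypothetical placeholders (standing in for "run PolyDiffuse with a k-vertex proposal" and the model's likelihood evaluation); this is not the authors' implementation:

```python
def search_vertex_count(make_candidate, log_likelihood, base_count, radius=2):
    """Local search around a proposal's vertex count: build a candidate for
    each count in [base_count - radius, base_count + radius] (at least 3 for
    a valid polygon) and keep the one the model scores highest."""
    best = None
    for k in range(max(3, base_count - radius), base_count + radius + 1):
        cand = make_candidate(k)
        ll = log_likelihood(cand)
        if best is None or ll > best[1]:
            best = (k, ll, cand)
    return best[0], best[2]

# Toy stand-ins: the "candidate" is just the count itself, and the toy
# likelihood peaks at the true count (unknown to the search procedure).
TRUE_COUNT = 5
best_k, _ = search_vertex_count(make_candidate=lambda k: k,
                                log_likelihood=lambda k: -abs(k - TRUE_COUNT),
                                base_count=4)
```

The same skeleton also covers the missing-instance strategy in failure mode (2): enumerate subsets of a high-recall proposal instead of vertex counts, and again keep the highest-likelihood reconstruction.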
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concern with a detailed response. | Rebuttal 1:
Rebuttal: We thank all reviewers for your time and efforts in providing valuable comments and constructive feedback. We are glad that all the initial reviews are on the positive side (two accept, two weak accept, and one borderline accept).
---
We use this global response as **a complement to the individual responses**, which will answer reviewers' questions that require additional figures or tables. The content of this global response is organized as follows, and we will also incorporate them into the paper.
**GR1:** Tables and discussions for the computational costs of PolyDiffuse on the two tasks. **(for Reviewer-t57P and Reviewer-uTWe)**
**GR2:** A figure of PolyDiffuse's example failure cases on the HD mapping task, as well as corresponding discussions. *This is an extension of our response to Q2 of Reviewer-pHyL*. **(for Reviewer-pHyL and Reviewer-uTWe)**
**GR3:** An illustrative toy experiment to show how standard DM fails, with a figure and discussions. **(for Reviewer-uTWe)**
---
**GR1. (Reviewer-t57P, Reviewer-uTWe) The computational cost of the denoising process, and the comparison to the methods for generating initial proposals.**
Table 1 and Table 2 in the rebuttal PDF present the speed-performance tradeoff of PolyDiffuse on the HD map and floorplan reconstruction tasks, respectively. Note that during the denoising process, the image encoding parts of the denoising network only run once, while the transformer decoder part runs for multiple rounds. In our computation environment, *the time of image encoding vs. the time of transformer decoder* is 4:1 for HD map reconstruction (w/ MapTR's network architecture) and 2:1 for floorplan reconstruction (w/ RoomFormer's network architecture).
Speed is not a crucial factor for floorplan reconstruction, so we can just use more denoising steps in real applications for better reconstruction quality. For HD map reconstruction, since FPS is an important consideration for online applications, we might have to pick the number of denoising steps more carefully. In general, our method provides a reasonable speed-performance tradeoff. Furthermore, since PolyDiffuse is not restricted to a fixed task-specific model for the proposal generator, our performance/efficiency can keep improving when better task-specific base methods appear in the future.
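The amortization described above (image encoding once, decoder per step) can be made concrete with a toy cost model. This is an illustrative sketch with arbitrary time units and a hypothetical function name, not the paper's actual timing code:

```python
def denoising_latency(encode_time, decode_time, steps):
    """Image encoding runs once per frame; only the transformer decoder
    repeats across denoising steps, so extra steps add decoder time only."""
    return encode_time + steps * decode_time

# With the ~4:1 encoder-to-decoder time ratio reported for the MapTR
# backbone (arbitrary units), 10 denoising steps cost 4 + 10*1 = 14 units,
# far less than naively re-running the whole 5-unit network each step (50).
print(denoising_latency(4, 1, 10))
```

This is why adding denoising steps degrades FPS sublinearly rather than linearly in the full network cost.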
---
**GR2. (Reviewer-pHyL, Reviewer-uTWe) Extended discussions/analyses on the failure modes of PolyDiffuse.**
*(This response is an extension of our response to Q2 of Reviewer-pHyL, please read that response first)*
We have discussed three types of failure modes in *Q2 of Reviewer-pHyL*. The first type (i.e., wrong number of vertices) has been covered in *Figure 4 of the main paper*. We now use Figure 1 of the rebuttal PDF to show the other two types of failure cases with the HD mapping task and provide some discussions below.
**Wrong number of instances.** Examples (1) (2) (3) in Figure 1 of the rebuttal PDF are typical failure modes involving a wrong number of instances. When MapTR serves as the proposal generator, mistakes such as missing instances or duplicate/redundant instances are hard for PolyDiffuse to recover from. Our *response to Q2 of Reviewer-pHyL* only provides an alleviating strategy based on search and likelihood evaluation; more elegant approaches are needed to better handle this challenge. Likewise, if the proposal generator predicts wrong semantic labels, PolyDiffuse cannot recover.
**Inaccurate shape/location caused by limited generalization ability of the networks to unseen styles of initial proposals.** When using rough human annotations as the initial proposals in the paper, we assume that the correct number of polygonal instances and semantic labels are given, and PolyDiffuse is responsible for generating accurate coordinates for all the vertices. However, as demonstrated by (4) (5) (6) of Figure 1, although the results are visually reasonable, many predictions are not counted as true positives by the matching criteria.
As analyzed in our *response to Q2 of Reviewer-pHyL*, the networks don't perfectly generalize to the circle-shaped initial proposals, leading to slight shifts in instance location or errors in shape. Note that the matching criteria of the HD mapping task are very strict (autonomous driving must guarantee safety). This inaccurate shape/location issue also appears when using MapTR initial proposals due to some inaccurate initial predictions, but the overall precision of using MapTR proposals is higher than using rough annotations, as shown in Table 2 of the main paper.
Another cause of inaccurate location/shape is noisy image inputs caused by occlusions or bad weather conditions, but this is a common challenge for all HD mapping methods (e.g., MapTR, VectorMapNet) and is out of scope for this paper.
---
**GR3. (Reviewer-uTWe) Clarifications and experimental evidence that perturbations of the optimal noise for a particular reconstruction can cause failure of denoise-based reconstruction.**
In Figure 2 of the rebuttal PDF, we provide a toy experiment to demonstrate how a standard DM easily fails and why a good initial noise is important. Note that we have clarified the definition of standard DM in the *response to Q7 of Reviewer-uTWe*. In this example, the data contains a single toy example with 6 rectangular shapes, so there are $6! = 720$ permutation-equivalent representations. After sufficient training, we draw four samples using the image-conditioned denoising process. The DDIM sampler is used with 10 sampling steps, so the randomness comes only from the initial Gaussian noise. As the figure shows, only Sample 3 gets the correct reconstruction result. Given the challenges of set ambiguity, a standard conditional DM has trouble even overfitting a single data sample, and easily gets wrong outputs when the initial noise is inappropriate.
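The set-ambiguity count in the toy example above is just a factorial; a one-line check (not from the paper's code) makes the combinatorics explicit:

```python
import math

# 6 rectangular shapes => 6! order-equivalent vector encodings
# of the exact same set, all of which the DM must treat as one target.
permutation_equivalent = math.factorial(6)
print(permutation_equivalent)  # 720
```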
Pdf: /pdf/825018b5a7fef6267c0e91cd71a26f01bcef79ae.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The manuscript introduces an adaptation of the DDPM paradigm to enable denoising sets (instead of single data points like images). The method, called GS-DM, does this by adding noise per set element via learned guidance networks.
This approach requires the addition of a proposal generator that initializes the number of sets and their parameters for the denoising process.
The manuscript applies this new set-based diffusion model to two tasks: generating floorplan polygons from floorplan images and maps from autonomous car images. The proposed method performs well in comparison to related work.
Strengths:
The authors identify a core limitation in DDPMs when it comes to set-based data and propose a sound way that empirically works well to solve it. The paper is well written, figures are high quality and support the writing well.
The evaluation is thorough and clearly demonstrates that (1) the set-based diffusion is superior to the default DDPM (table 3) and (2) that the denoising approach helps improve state of the art results from MapTR and RoomFormer when applied on top of their final outputs. It is also very interesting to see how simple the inputs for denoising can be (as simple as the polygon centers) and still perform at almost state of the art levels.
Since the underlying concept of set diffusion models is very general and could apply in various other domains, this work is significant to a larger audience than just the 3d reconstruction or perception community.
Weaknesses:
While the paper is well written, it could use some more clarity around how the guidance network is learned (line 122ff), since this is one of the key novelties. Just from reading the text, I would not be able to reproduce the method at this point.
- How is the nearest negative permutation found? It seems like this could become an intractable problem very quickly.
- Is the triplet loss computed simply on the polygon corners, for example?
The key limitation of the current approach is the fact that the number of elements in the set has to be initialized correctly by the proposal generator during inference. This is acknowledged by the authors and a reasonable limitation that can be addressed in follow on work imo.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In Fig3: is it mu/sigma _phi or _psi?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations are adequately addressed by the authors at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable comments and appreciate the overall positive feedback on the writing, the experiments, and the potential extension of our approach to broader domains. We will address your questions/comments as follows.
---
**Weaknesses: Clarifications of the guidance network learning.**
We are sorry for the confusion. Details are presented in the supplementary (in particular, the permutation loss Eq.12 in *Sec.3.3 of the supplementary document*). We briefly answer your two sub-questions below and will add these clarifications to the main paper.
(1). **nearest negative permutation in the permutation loss**: Yes, finding the nearest negative permutation immediately becomes intractable as the number of elements ($N$) grows. As described in Sec.3.3 of the supplementary document, our solution is to approximate the permutation loss (Eq.12 of the main paper) with $N$ element-level triplet losses (Eq.9-12 of the supp), which reduces the computational cost from $O(N!)$ to $O(N^2M)$, where $M$ is the maximum number of vertices of an element (i.e., a polygon or polyline) in the data sample $\mathbf{x}_0$. Empirically, our experimental results show that this approximation works well and helps learn reasonable guidance under feasible computations.
(2). **Triplet loss computation:** The triplet loss is computed based on the L1 distance between the corners' coordinates of the two polygons or polylines (the $D(i, j)$ in Sec.3.3 of the supp). We will clarify this detail.
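As an illustrative sketch of the two ingredients above — the corner-wise L1 distance $D(i, j)$ and an element-level triplet term — the following is our reading of the rebuttal, not the authors' implementation; it assumes elements are padded to the same number of corners, and summing $N$ such terms (with negatives found by an $O(N^2M)$ pairwise scan) stands in for the $O(N!)$ permutation loss:

```python
def corner_l1(poly_a, poly_b):
    """L1 distance between corresponding corner coordinates of two
    polygon/polyline elements (the D(i, j) of the supplementary)."""
    return sum(abs(xa - xb) + abs(ya - yb)
               for (xa, ya), (xb, yb) in zip(poly_a, poly_b))

def element_triplet_loss(anchor, positive, negative, margin=1.0):
    """One element-level triplet term: the anchor element should be
    closer to its matched (positive) element than to the nearest
    negative element, up to a margin."""
    d_pos = corner_l1(anchor, positive)
    d_neg = corner_l1(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)
```

For example, with an anchor equal to its positive and a far-away negative, the term is zero; swapping positive and negative yields a large penalty.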
---
**Questions: Symbols in Fig.3.**
It should be $\phi$ for both $\bar{\mu}$ and $\bar{\sigma}$ in Fig.3. We thank you for the catch and will fix the mistake.
---
Rebuttal Comment 1.1:
Title: response
Comment: Thanks for clarifying my questions. | null | null | null | null | null | null |
Are Vision Transformers More Data Hungry Than Newborn Visual Systems? | Accept (poster) | Summary: This work investigates the learning efficiency of vision transformers by comparing their invariant object recognition performance to that of newborn chicks, when being exposed to similar number of images. The authors find that ViTs learn view-invariant representations like chicks and therefore claim that they are not more data hungry than some real animals.
Strengths: The core idea of this paper, training visual models on the same input that real organisms receive and using such models to study animal development, is a good one. The paper is also in general well written with clear logic. To confirm that the results are general, the authors tested multiple architectures. The embedding-space visualization shows a potential explanation for why multi-attention-head models outperform single-attention-head models.
Weaknesses: The biggest weakness is that the starting point, that ViTs are thought to be more “data-hungry” than brains, is already proven false in earlier works. A paper published last year (2022) at ECCV, which appeared on arXiv even earlier, shows that ViTs achieve reasonable performance even when trained on only 2040 images [1]. Although the authors repeatedly mention that ViTs are thought to be more “data-hungry”, they did not provide any citation supporting this point. They also did not discuss why this point is still worth investigating given this earlier work.
In addition, the linear classification training used during the test phase is also unjustified in the paper. This training requires a supervision signal to be given to the model, but this signal does not seem to be available to the real animal. This makes the interpretation of the test-phase results potentially wrong. A much simpler classifier that requires very little or no training should be used in this phase, such as a correlation classifier against earlier stored “imprinted” object hidden representations. The current results may very well be an overestimation of the network’s performance under a simpler classifier.
It is also unclear to me whether the simulated input during the training phase closely reflects the input visual statistics real chicks can receive. The authors designed certain agent movement patterns; is this pattern similar to what real newborn chicks would do? How about the frequency of this “movement cycle”? Is this leading to more or fewer augmentations compared to the real animal?
[1] Cao, Yun-Hao, Hao Yu, and Jianxin Wu. "Training vision transformers with only 2040 images." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Address the earlier works better about whether ViTs are more data hungry than real animals.
- Replace the supervised linear classification in the test phase with a much simpler classifier that requires no or little training.
- Does the simulated input during the training phase closely reflect the input visual statistics real chicks can receive?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and time. We address your 3 critiques below.
>#1: The biggest weakness is that the starting point, that ViTs are thought to be more “data-hungry” than brains, is already proven false in earlier works...
We thank the reviewer for pointing out this citation. This concern highlights an opportunity for us to address this debate in our Related Works section. In the new text, we will discuss that the general consensus in the field is still that ViTs are data hungry models (especially relative to CNNs). For example, a recent (2022) review of vision transformers notes that, “CNNs encode prior knowledge about the images... that reduces the need of data as compared to Transformers that must discover such information from very large-scale data.” (Khan et al., 2022, ACM Computing Surveys). Indeed, even the Cao, Yu, and Wu (2022) paper explains that ViTs, “achieve competitive results with CNNs but the lack of the typical convolutional inductive bias makes them more data-hungry than common CNNs” and that “ViT and variants achieve competitive results with CNNs but require significantly more training data. For instance, ViT performs worse than ResNets with similar capacity when trained on ImageNet (1.28 million images). One possible reason may be that ViT lacks certain desirable properties inherently built into the CNN architecture that make CNNs uniquely suited to solve vision tasks, e.g., locality, translation invariance and hierarchical structure. As a result, ViTs need a lot of data for training, usually more data-hungry than CNNs.”
We will also point readers to work that has tried to reduce ViTs’ dependence on large scale training data. For example, some researchers have proposed adding a CNN architecture to ViTs to take advantage of the spatial inductive bias inherent in CNNs (e.g., Yuan et al., 2021, ICCV). Another approach from Cao et al. (2022) is to introduce pre-training with artificial augmentations to the dataset (e.g., multi-crop and CutMix).
Critically, however, our approach differs from Cao et al. (2022) in three key ways. First, we used a single visual object in our training set with no extra data augmentation, whereas Cao et al. used 2040 images of many different objects with extra data augmentation. Thus, our results demonstrate that ViTs are even less data hungry than previously shown. Second, we used biologically plausible data augmentations that were generated by simulating the visual experiences of an agent moving through virtual replicas of the animal chambers. In contrast, Cao et al. (2022) used artificial augmentations like CutMix (an augmentation that would be impossible for an actual animal to perform). Third, our claim is that ViTs are not more data hungry than newborn visual systems. The only way to address this claim directly is to train and test ViTs and newborn animals on the same tasks. In this sense, our findings are distinct from any existing work on ViTs (including Cao et al., 2022). Moreover, our results have important scientific implications. ViTs have the potential to be powerful image-computable models of newborn visual systems, but these models will not be accepted if they are thought to be more data hungry than brains. Our results directly contradict this widely held assumption, and thus will shape our growing understanding of the relationship between brains and transformers.
>#2: ...A much simpler classifier that requires very little or no training should be used in this phase, such as a correlation classifier to earlier stored “imprinted” object hidden representations. The current results may very well be an overestimation of the network’s performance under a simpler classifier.
We used supervised linear classifiers to evaluate the unsupervised ViTs because this is the standard approach in computational neuroscience for quantifying the performance of self-supervised models. While supervised learning was not present in the chick experiments, linear classifiers are simply a formal way of quantifying the degree and form of learned representations. In neuroscience, information that is available directly via a linear readout is generally considered to be explicitly represented by a model or brain region. The linear classifier does not provide the ViT with new information but merely measures the relative placement of different images within the model’s existing feature space. Linear classifiers are also a reasonable approximation of downstream neural computation, since linear classifiers express a plausible rate-code model for downstream decoder neurons (i.e., linear weightings followed by a single threshold value; Hong et al., 2016, Nature Neuroscience).
Accordingly, the most common way to evaluate unsupervised models that are trained on biologically plausible training data (e.g., simulated first-person images from chick experiments) is to test the unsupervised model’s ability to classify objects using a supervised linear readout (c.f., Zhuang et al., 2022, NeurIPS; Zhuang et al., 2021, PNAS; Orhan, Gupta, & Lake, 2020, NeurIPS).
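For readers unfamiliar with this evaluation style, a linear readout on frozen features can be sketched as below. This is a generic ridge-regression probe under our own simplifying assumptions (hypothetical function name, synthetic features), not the paper's evaluation code; the key point is that the probe only re-weights features the encoder already provides:

```python
import numpy as np

def linear_probe_accuracy(train_x, train_y, test_x, test_y, reg=1e-3):
    """Fit a closed-form ridge linear readout on frozen features and
    report test accuracy. The probe cannot create information the
    encoder did not already represent."""
    X = np.hstack([train_x, np.ones((len(train_x), 1))])   # add bias column
    Y = np.eye(int(train_y.max()) + 1)[train_y]            # one-hot targets
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    Xt = np.hstack([test_x, np.ones((len(test_x), 1))])
    preds = (Xt @ W).argmax(axis=1)
    return float((preds == test_y).mean())
```

If the frozen embeddings place the two object classes in linearly separable regions, the probe recovers that structure; if they do not, no amount of probe training helps.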
>#3: It is also unclear to me whether the simulated input during the training phase closely reflect the input visual statistics real chicks can receive. The authors designed certain agent movement patterns, is this pattern similar to what real newborn chicks would do? How about the frequency of this “movement cycle”? Is this leading to more or less augmentations compared to the real animal?
We designed our agents to use the same six degrees of freedom for head movements (roll, pitch, yaw) as newborn chicks. However, it is not possible to affix cameras to a chick’s head, so our simulations may not reflect the movement cycles of the animals. In future research, we plan to characterize the nature of chicks’ actual visual statistics by using tools like DeepLabCut and Unity to yoke the virtual agent to actual chicks during the Input Phase of the experiment.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. The authors addressed my first question well, but the response to my other two questions are not convincing to me.
For the second question, what the authors want to achieve in this work is a strict comparison with the performance from the real chicks, which was measured in an earlier work using a specific testing method. I don’t think the linear classifier training is possible under that testing method, as training the classifier requires a lot of supervision, which simply does not exist in that testing phase. Although linear classifier training was used in earlier works, it does not mean that it can be used here as the authors attempt to compare the performance of the networks to the animals. Also, the final difference between the models and the animals is small. So, it’s highly possible that how this readout method is done is critical to this comparison.
For the third question, can the authors show how this “cycle frequency” influences the performance? If the performance is not influenced significantly by this cycle frequency parameter across a large range of values (especially for lower values), I am more convinced that the current result is not just due to a specific value for this parameter.
I will increase my score to 4 right now and am willing to further increase the score if the authors can address my questions here well.
---
Reply to Comment 1.1.1:
Comment: >Can the authors show how this “cycle frequency” influences the performance?
We thank the Reviewer for clarifying their critique regarding “cycle frequency.” In prior experiments with CNNs, we tested whether different parameters of cycle frequency (i.e., different amounts of data augmentation due to movement patterns) significantly impact performance. Specifically, in a “no head rotation” condition, the agent collecting the images stared continuously at the object as they moved around the chamber, so the object was always in the middle of the camera. Conversely, in the “head rotation” condition, the agent rotated their head 30° in each direction along the 3 axes of rotation (yaw, pitch, roll) in a random order while looking at the object. Consequently, the agent in the head rotation condition collected many more unique views of the object than the agent in the no head rotation condition, due to the data augmentation produced by the head rotations. We found that view-invariant object recognition performance was nearly identical across these two conditions, so we focused on the “head rotation” condition for the present ViT experiments.
We are, however, in the process of adding a “no head rotation” condition to the present paper testing ViTs to address this concern.
>What the authors want to achieve in this work is a strict comparison with the performance from the real chicks, which was measured in an earlier work using a specific testing method. I don’t think the linear classifier training is possible under that testing method, as training the classifier requires a lot of supervision, which simply does not exist in that testing phase. Although linear classifier training was used in earlier works, it does not mean that it can be used here as the authors attempt to compare the performance of the networks to the animals.
We agree with the Reviewer: a direct test of the learning abilities of newborn chicks and ViTs will ultimately require an entirely unsupervised training/testing approach, since the chicks were trained/tested without any supervisory signals. In our original submission, the ViT (encoder) was trained in an unsupervised manner, but we used a supervised linear classifier (decoder) to evaluate the features learned by the ViT. Following the Reviewer’s suggestion, we will add new experiments to the paper that use unsupervised decoders to evaluate the ViTs. Specifically, we used a modification of the unsupervised technique described by Ayzenberg & Lourenco (2022_eLife), initially developed to test human babies.
The chicks’ preferences in Wood (2013) can be conceived as a measure of alignment between the test stimuli and the chick’s internal representation of their imprinted object. Chicks will approach the stimulus they perceive to be most similar to their internal representation of their imprinted object (i.e., the stimulus that produces less mismatch, or ‘error,’ between the stimulus and their representation). To approximate this in silico, we converted each trained ViT model into an autoencoder that was “imprinted” to the same stimulus as the chicks and tested on the same stimuli as the chicks. An autoencoder is trained to reconstruct the original stimulus from a lower dimensional representation, and the reconstruction loss is higher when there is a larger mismatch between a given stimulus and the internal representation.
We converted the models into autoencoders by attaching a simple fully connected downstream decoder to the (trained and frozen) ViT; then we performed unsupervised training on the decoder, using the same images that were used to train the ViT encoder. Consequently, both the encoder and decoder were only trained on images of a single object shown from a single viewpoint range, akin to the chicks. Once the decoder was trained, we used the output from the decoder to quantify how similar each test stimulus was to the ViT’s internal representation.
The chicks were tested using a two-alternative forced-choice test (2AFC). To approximate the 2AFC task, we fed two object images into the autoencoder (i.e., ViT + decoder), then measured the error signal for each image. If the error signal was smaller for the imprinted object than the novel object, the model was scored as ‘correct.’ If the error signal was larger for the imprinted object than the novel object, the model was scored as ‘incorrect.’
The model scored 62.1%, which is significantly higher than chance level (50%): $\chi^2(1, N = 576) = 34.03$, *p* = .000000005. This new result shows that a purely unsupervised learning model, in which both the encoder (ViT) and decoder are trained without any supervised signals, can learn to solve the same view-invariant recognition task as newborn chicks when trained ‘through the eyes’ of chicks.
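The 2AFC scoring rule described above reduces to a per-trial comparison of reconstruction errors. The sketch below uses hypothetical error values and a hypothetical function name, purely to make the rule explicit:

```python
def two_afc_accuracy(errors_imprinted, errors_novel):
    """Fraction of 2AFC trials scored 'correct': the autoencoder's
    reconstruction error is lower for the imprinted object than for
    the novel one, i.e., the model 'approaches' its imprinted object."""
    trials = list(zip(errors_imprinted, errors_novel))
    correct = sum(e_imp < e_nov for e_imp, e_nov in trials)
    return correct / len(trials)
```

For example, `two_afc_accuracy([0.1, 0.2, 0.5], [0.3, 0.1, 0.9])` scores two of three trials correct.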
We hope this fully satisfies all of the remaining concerns. We thank the Reviewer for encouraging us to pursue the unsupervised learning experiments: we think they greatly improve the paper! | Summary: This study challenges the notion that Vision Transformers (ViTs), which excel in many computer vision benchmarks, require more training data than biological brains. The study involved controlled experiments on both ViTs and newborn chicks in impoverished visual environments, using a video game engine to simulate the chicks' environments and train self-supervised ViTs that use time as a teaching signal, similar to biological systems. The authors found that when trained in conditions similar to those of newborn chicks, ViTs effectively performed the same object recognition tasks, suggesting that ViTs are not necessarily more "data hungry" than biological systems and can develop animal-like object recognition capabilities through generic attention-based learning mechanisms.
Strengths: The main strength of this paper is the novelty of training chicks with a regimented visual schedule and ensuring that agents are trained the same way for fair comparison. It is a sorely needed paradigm for claims of data efficiency during organism developmental timescales.
Weaknesses: There were a couple of points regarding the experimental setup and model comparison that can be clarified in the camera ready. I have listed my specific questions in the next section below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In line 233, it is stated that the “embodied visual data streams acquired by newborn animals are rich in their own right.” How do you ensure that the agent visual stream is matched to that of the animal in terms of number of samples?
2. In lines 111-112, it is said that “the study produced data with a high signal-to-noise ratio”. What does this mean exactly, and how is it quantified? What would noise be in this context?
3. In line 122, it is stated that “The agent received visual input (64x64 resolution images)”. Is this matched physically to the visual acuity of newborn chicks? If not, what would the appropriate resolution be, and what happens when you train ViTs at that resolution?
4. In line 125, it is mentioned there are 4 rearing conditions. What are they? I didn’t seem to see that in the main text but could have mistakenly missed it.
5. Is Figure 3A on heldout test performance, where Ntest = 1? Please clarify in the figure caption.
6. Can you make any statements or predictions about data efficiency of chicks when there is more than one object to be learned? (cf. lines 234-235).
7. Minor: How do temporally-contrastive CNNs match ViTs in this context in terms of number of training samples? It would be good to have some architectures other than ViT to show whether adding more inductive biases increases or decreases training efficiency.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, the authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback. We address your questions below:
> In line 233, it is stated that the “embodied visual data streams acquired by newborn animals are rich in their own right.” How do you ensure that the agent visual stream is matched to that of the animal in terms of number of samples?
Great question. Ultimately, the field does not have well established procedures for comparing the number of training images across animals and machines, which is why we focused on controlling the visual environment available to chicks and ViTs, rather than controlling the number of training images per se. However, while we cannot ensure that the number of samples is matched between the animals and ViTs, we can make a rough comparison based on the rate of learning in biological systems. Researchers estimate that biological visual systems perform predictive error-driven learning every 100 ms (corresponding to the 10 Hz alpha frequency originating from deep cortical layers; O’Reilly et al, 2021). If we assume that newborns spend about half their time sleeping, this would correspond to ~430,000 images in their first day.
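The back-of-envelope estimate above can be made explicit; every number below is one of the stated assumptions (10 Hz error-driven updates, half the day asleep), not a measured value:

```python
# Rough upper bound on learning events in a newborn's first day.
seconds_per_day = 24 * 60 * 60      # 86,400 s
awake_fraction = 0.5                # ~half the time spent sleeping
updates_per_second = 10             # one predictive update per 100 ms (10 Hz)

images_first_day = seconds_per_day * awake_fraction * updates_per_second
print(int(images_first_day))        # 432,000, i.e., the "~430,000" figure
```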
We also emphasize that a widely accepted critique of ViTs is that they are more data hungry than animals. This critique stems from the observation that ViTs are typically trained on millions of images across thousands of object categories, which seems excessive compared to the visual environments of newborn animals. Our paper shows that the embodied data streams acquired by newborn animals are rich in their own right. During everyday experience, newborn animals spontaneously engage in self-generated data augmentation, acquiring large numbers of unique object images from diverse body positions and orientations. We show that ViTs, like animals, can leverage these embodied data streams to learn high-level object features in impoverished visual environments (which we will clarify in the Camera Ready).
> In lines 111-112, it is said that “the study produced data with a high signal-to-noise ratio”. What does this mean exactly, and how is it quantified? What would noise be in this context?
By “noise,” we mean unexplained inter-subject variation, which we can measure as the standard deviation of performance between chicks. By “signal-to-noise ratio,” we mean the size of the effect (the mean difference between chick performance and chance performance) compared to the variability. We quantify the signal-to-noise ratio as Cohen’s d (the “standardized mean difference” = mean difference / standard deviation). Our revision will point readers to Wood & Wood (2019) for a detailed explanation of how this method produces precise measurements of performance with large effect sizes.
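Concretely, the standardized mean difference described above can be computed as follows (a generic sketch with illustrative scores, not the authors' analysis code):

```python
from statistics import mean, stdev

def cohens_d(scores, chance=0.5):
    """Cohen's d as the "signal-to-noise ratio": (mean performance -
    chance) divided by the sample standard deviation across chicks."""
    return (mean(scores) - chance) / stdev(scores)
```

For instance, per-chick accuracies of 0.7 and 0.8 against a 0.5 chance level give a very large effect (d ≈ 3.54), because the between-chick variability is small relative to the mean difference.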
> In line 122, it is stated that “The agent received visual input (64x64 resolution images)”. Is this matched physically to the visual acuity of newborn chicks? If not, what would the appropriate resolution be, and what happens when you train ViTs at that resolution?
Great question! Chick visual acuity is about 25% of human visual acuity, which is why we initially used lower resolution images. However, we also performed new experiments to test whether different image resolutions would impact our results. To do so, we trained both the ViT-CoT and VideoMAE models with 224x224 resolution images. We did not observe a large difference between the small (64x64) and large (224x224) image resolutions, so we do not believe that image resolution strongly impacted our results. We will add these new experiments to our Camera Ready version.
> In line 125, it is mentioned there are 4 rearing conditions. What are they? I didn’t seem to see that in the main text but could have mistakenly missed it.
In the chick experiments, each chick was raised with one of two objects (Object 1 vs. Object 2) presented from one of two viewpoints (front vs. side), making 4 rearing conditions in total. We will add this information to the Supplemental Materials.
> Is Figure 3A on heldout test performance, where Ntest = 1? Please clarify in the figure caption.
Yes, Figure 3A shows held-out, cross-validated test performance where Ntest = 1. (The results for Ntest = 11 are in the Supplemental Materials.) We will make this explicit in the Camera Ready version.
> Can you make any statements or predictions about data efficiency of chicks when there is more than one object to be learned?
This is a great question. At this point, we cannot make any statements about the data efficiency of chicks when there is more than one object to be learned, since all of our chick experiments have focused on how chicks learn their first object representation. In the near future, we will start exploring how chicks learn multiple objects, potentially providing data for distinguishing between candidate image-computable models of newborn visual systems.
> How do temporally-contrastive CNNs match ViTs in this context in terms of number of training samples? It would be good to have some architectures other than ViT to show whether adding more inductive biases increases or decreases training efficiency.
We agree that a comparison between CNNs and ViTs would be valuable for determining how different inductive biases may impact the development of vision. To do so, we added new experiments comparing CNNs with ViTs, in which we evaluated a temporally contrastive CNN architecture (SimCLR-CLTT) using the same training and test conditions we used for the ViTs (ViT-CoT in original submission and new VideoMAE results added during rebuttal). We found two interesting patterns: (1) both CNNs and ViTs could solve the task (i.e., learn view-invariant object representations from impoverished visual environments) and (2) the stronger inductive biases of the CNNs led to a small but significant bump in performance over ViTs. These findings suggest that starting with a CNN architecture is beneficial, but not necessary, for learning view-invariant object representations.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thank you for your thorough & detailed response, including the additional experiments. I find the results with the CNN especially intriguing, and I am glad you will be including it in the revision. I hope to see this paper accepted, and I will be advocating for it! | Summary: This article examines whether ViTs are as data hungry as newborn chicks. For experimental control, the authors rear chicks in a darkened chamber and control several biological variables. On the modeling side, ViT-CoT is proposed, with data augmentation, embeddings, and light modifications to the model. Finally, comparative experiments are carried out that support the paper's conclusions.
Strengths: Clear writing and good figures for easy understanding.
Interesting perspective on comparing newborn and vit.
Weaknesses: There is a big difference between living organisms and computers, so some settings in the experiment are not enough to rule out the influence of other factors, and the conclusions are somewhat subjective. For example, the artificial "newborn" is designed and implemented by humans, and as far as I can tell from this paper, this design can only imitate limited features of living organisms. Also, why can chicks be taken as representative of the “newborn visual system”?
Inadequate verification of the appropriateness of the task for addressing the central question. As far as I can tell from this paper, the final task for both the ViT and the newborn system is to recognize objects across novel views, and the objects are composed of virtual, hand-designed geometric elements. The difficulty and appropriateness of this task should be studied so that the conclusion is convincing.
This paper proposes a new contrastive loss function to train the ViT in a self-supervised manner. However, the other powerful self-supervised approach, i.e., masking part of the image and training the model to reconstruct the complete image, is neither discussed nor tested. This leaves the method design and experiments incomplete, since the subject of this paper places no limitation on the training method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: On line 129, page 4: “Thus, to compare the learning abilities of newborn chicks and ViTs, we developed a new self-supervised ViT algorithm...... Specifically, ViT-CoT architecture was initialized with three different seeds ……”. This paper uses time-based data augmentations and learns spatial information from the input images. These choices give the ViT advantages in training that the chick does not have. For example, when a chick learns a three-dimensional model of an object, that learning may be naturally and strongly correlated with the chick's own spatial variation. A chick may be confused when it sees three-dimensional objects move while its own position in space does not change.
We all know that, unlike computers, living organisms may encounter fatigue, illness, or other adverse conditions during learning, resulting in a decline in learning ability. Even without these interfering factors, an animal still needs to grow and feed itself, and it is disturbed by smell, temperature, feather movement, etc. How were such problems controlled or reduced in this experiment? If the learning ability of organisms is underestimated because of such problems, how can the conclusion of this paper be drawn?
In Figure 3, why does the accuracy of the chick not increase with the training time?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper lacks discussion on the setting of the proposed task and effects of other training methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and time. We address your critiques below.
*Reviewer raises two concerns: (1) Why use chicks as a model system for studying newborn vision? (2) Our design can only address some features of animals.*
For (1), our revision will clarify why chicks are optimal for studying newborn visual learning. We used chicks as a model system because chicks can be raised in controlled environments from the onset of vision. This allows us to control all of the visual experiences (training data) acquired by the animal: an essential feature for directly comparing the learning abilities of animals and machines. Moreover, chicks can inform our understanding of human vision because the avian brain has similar cells and circuitry to mammalian brains (Güntürkün & Bugnyar, 2016; Jarvis et al., 2005; Karten, 2013). Avian brains also share a common large-scale organization with mammalian brains, including a hierarchy of sensory information processing, hippocampal regions, and prefrontal areas.
For (2), we agree that our study only tests specific learning abilities of animals, but this is an essential first step in this research program. Ultimately, we introduce a powerful new experimental approach (the first of its kind) for directly comparing the learning abilities of animals and transformers. It would be interesting for future research to explore whether ViTs can replicate other features of living organisms (e.g., object parsing, face recognition, motor development, navigation, and audition).
*Reviewer argues that we did not verify the appropriateness of the task for addressing the topic issue.*
Our revision will clarify the appropriateness of the task. We modeled the virtual objects and task after a previous study that tested for invariant object recognition in adult rats (Zoccolan et al., 2009). The objects are well designed for studying view-invariant recognition because changing the view of each object produces a greater within-object image difference than changing the identity of the object. Thus, recognizing these particular objects from novel views requires an animal (or model) to learn abstract object features that can generalize across large, novel, and complex changes in the object’s appearance.
*Reviewer argues that we had an insufficient method design because we did not test other ViTs that use masking approaches.*
We agree that MAEs are an important candidate for unsupervised visual learning. As suggested, we performed new experiments with the VideoMAE model. We trained/tested VideoMAE using the same approach that we used to train/test ViT-CoT. We found that VideoMAE, like ViT-CoT, can perform the task, showing that vision transformers are sufficient to drive the development of animal-like object recognition.
*Reviewer argues that the temporal learning approach is appropriate for ViTs, but not for chicks.*
The reviewer is concerned that ViT-CoT uses time-based data augmentation, but that chicks instead use learning mechanisms that are correlated with their own position in space. We will revise the manuscript to emphasize that chicks are highly sensitive to temporal information, thereby justifying our use of models that leverage temporal information to learn. For example, newborn chicks use temporal information when parsing objects from backgrounds (Wood & Wood, 2021a), binding color and shape features into integrated object representations (Wood, 2016), building view-invariant object representations (Wood & Wood, 2016; 2018), and building view-invariant face representations (Wood & Wood, 2021b).
*Reviewer argues that animals have needs (e.g., hunger, fatigue) that are not present in the ViT experiments, making comparisons across chicks and ViTs difficult.*
We agree; it is inevitable that any comparison between humans/animals and machines will open the possibility for fatigue, illness, hunger, etc. to lead to noisy estimates of biological learning. To minimize the impact of these factors, our design uses automation and long test periods. As reviewed in Wood & Wood (2019), this methodology produces data with a high signal-to-noise ratio and strong test-retest reliability across individual chicks.
We also note that, while many factors may contribute to differences between animals and machines, the critical factor needed to compare learning across animals and machines is that both are provided with the same training data and tested on the same tasks. Currently, no other method comes close to achieving this goal. Accordingly, while other benchmarks can be used to evaluate whether trained models behave like mature animals, these benchmarks cannot be used to determine whether models use similar learning algorithms as animals (because the animals and models learned from different training data). We therefore argue that a major contribution of our paper is to add a new paradigm to the field that can unambiguously reveal whether animals and machines learn in the same way.
*Reviewer wonders why chick performance did not increase across the training period.*
Thank you for pointing out the lack of clarity. Fig. 3 shows the performance of ViT-CoTs (not chicks) trained on various numbers of training samples. The red line showing chick performance is provided so readers can easily see the number of training samples needed to reach chick-level performance. We will clarify this in the revision.
*Reviewer argues that the paper lacks discussion on the setting of the proposed task and effects of other training methods.*
To make the setting of the task clearer, we will direct readers to Wood (2013) for a detailed description of the task. Moreover, to provide a new dataset to the community (and to help clarify the setting of the machine task), we will provide the full set of training and test images that we simulated in the “digital twin” experiments. Finally, to address the effects of other training methods, we also added new experiments using VideoMAEs and CNNs (see rebuttal PDF).
---
Rebuttal 2:
Comment: Dear Reviewer xTn4,
We are nearing the end of the discussion period with authors.
The authors have responded in detail to your review, so pls minimally read and acknowledge their rebuttal, and state which (if any) issues you still do not find to be satisfactorily addressed.
You should do so as soon as possible.
Thanks,
AC | Summary: The authors present a study showcasing that, in one specific scenario, a vision transformer can match the visual learning performance of newborn chicks. The setup is as follows. Newborn chicks are raised in a dark enclosure for a week, and given only one visual stimulus from a variety of angles. Then, the chicks are presented with the visual stimulus from new angles, along with different visual stimuli. The chicks show a preference towards the same stimulus from new angles, which means that they have learned an angle-invariant generalization about the stimulus. Vision transformers are shown the same stimulus in a simulated environment, are trained on the simulated images, and are probed for the ability to learn angle-invariant generalizations too.
Strengths: Although not entirely novel (similar experiments have been done with CNNs and similar results have been shown with them), I do think that this is interesting work, and I did not find any issues with the experimental setup or results. It extends the previous findings from CNNs to transformers to showcase that transformers do not require more data than a newborn animal to learn the same generalization in one specific scenario.
Weaknesses: Not sure I really buy the 3-frame learning window idea. Does the algorithm actually not work if you only associate two frames with each other instead of three?
Do you say how many parameters your vision transformers have anywhere? This would be really useful to know.
Instead of a simulator, why not run the test with an actual robot driving/walking around? It would make the work more novel and interesting in my opinion.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I tried to put my points in "weaknesses" as questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are adequately addressed in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable questions and feedback, which we address below:
> Not sure I really buy the 3-frame learning window idea. Does the algorithm actually not work if you only associate two frames with each other instead of three?
We agree that the learning window duration is an interesting hyperparameter. We chose a learning window of 3 frames to approximate the temporal learning window in biological visual systems (around 300ms, which corresponds to 3 of our simulated frames). As the Reviewer points out, the learning window may influence the algorithm’s ability to learn visual features. To test this, we replicated our experiments with a 2-frame learning window. We find that the 2-frame learning window also performs well on the task, demonstrating that the algorithm’s success is not dependent on a learning window of 3 frames.
> Do you say how many parameters your vision transformers have anywhere? This would be really useful to know.
Our vision transformers have 5.8M (1-head architecture), 16.9M (3-head architecture), 36.4M (6-head architecture), and 59.4M (9-head architecture) trainable parameters, respectively. We provide details on the number of trainable parameters in Table 1 of the Supplementary Materials.
> Instead of a simulator, why not run the test with an actual robot driving/walking around? It would make the work more novel and interesting in my opinion.
This is an excellent idea for future research! In the present study, we focused on whether ViT models can account for the visual processing abilities of newborn animals (which is why we focus on “disembodied” models of vision that do not produce actions). However, we are currently building a platform for rearing embodied virtual agents in “digital twins” of the controlled rearing chambers, which will allow us to train real and artificial animals with the same data and test them with the same tasks.
We currently use virtual agents (rather than robots) because of their scalability. First, we can run virtual simulations with dozens of agents simultaneously, at faster than real-time speeds. As a result, we can collect more data and test more models. Second, using virtual agents allows our work to scale beyond our lab. We can make “digital twins” of our chambers available to researchers outside of our lab, so that they can replicate our findings and/or test new models. Finally, once we discover core learning algorithms that develop like newborn animals, we will plug our virtual brains into physical robots and test whether policies learned virtually generalize to the real world.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses to my concerns - I think that your paper would benefit from including the information that you responded with!
I still like the paper and my rating still leans slightly in the accept direction. Although, from reading your rebuttal it seems that you are planning to revise your paper to highlight these items:
* the visual twin approach is novel.
* ViTs are not more data hungry in general.
I think that you will face some criticism for making these claims, and I would advise against the wording in your rebuttal. From my understanding, the visual twin approach is not entirely novel because it was done with CNNs in prior work. And, yes, I would say that you've provided some evidence that ViTs are not more data hungry, but only in a very specific case. My rating might lean slightly in the reject direction if you make the claims in the paper too strong. | Rebuttal 1:
Rebuttal: Our paper tackles a question at the heart of biological and artificial intelligence: Are vision transformers (ViTs) more data hungry than newborn visual systems? The answer to this question will have significant implications for (1) how transformers are viewed by AI researchers (e.g., brain-like or not brain-like?) and (2) whether scientists embrace transformers as viable models of biological visual systems. The dominant assumption is that ViTs are more data hungry than visual systems, but we showed that this assumption is incorrect. To demonstrate this, we introduced a novel “digital twin” approach that allowed newborn chicks and ViTs to be trained in the same environments and tested with the same images. This approach made it possible—for the first time—to directly compare the learning abilities of animals and machines. We found that ViTs are not more data hungry than chicks: when ViTs are given the same training data (visual experiences) as chicks, they develop the same view-invariant recognition abilities as chicks.
The Reviewers largely agreed that this is an important topic and that our “digital twin” approach has promise. However, the ratings were ambivalent for three reasons:
First, in the original submission, we only tested one ViT model (ViT-CoT). To address this critique, we added new experiments testing VideoMAE models: state-of-the-art models for temporal ViTs. As shown in Fig. 1 (rebuttal PDF), our ViT-CoT model outperformed VideoMAE by 15%. This result shows that ViT-CoT is particularly strong at learning high-level visual features in impoverished environments, akin to newborn chicks.
Second, in the original submission, we did not compare ViTs to CNNs. It was thus unclear whether the spatial inductive bias of CNNs helps or hinders performance. To address this critique, we added new experiments testing CNNs that learn via contrastive learning through time (SimCLR-CLTT; Schneider et al., 2021). As shown in Fig. 2 (rebuttal PDF), both CNNs and ViTs can learn high-level object features from impoverished visual environments. This finding shows that the inductive bias of CNNs benefits, but is not necessary for, the development of view-invariant object recognition.
Third, Reviewers questioned the scope of the project and appropriateness of the task/model system. To address these critiques, we will revise our manuscript to emphasize that our study is the first of its kind, providing a unique opportunity to directly compare the learning abilities of brains and machines. Our revision will also emphasize that newborn chicks are uniquely suited for animal-machine comparisons because chicks are the only animal that can be reared in strictly controlled environments from the onset of vision, which allowed us to fully control their training data and give that same training data to ViTs. Finally, our revision will emphasize that our results have important scientific implications. ViTs have the potential to be powerful image-computable models of newborn visual systems, but these models will not be accepted if they are thought to be more data hungry than brains. Our results directly contradict this widely held assumption, and thus, will shape our growing understanding of the relationship between brains and transformers.
Pdf: /pdf/103191cb6577204f15254d8aced4af0460e9d01b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Exposing Attention Glitches with Flip-Flop Language Modeling | Accept (spotlight) | Summary: The paper introduces Flip-Flop Language Modeling (FFLM) to examine closed-domain
hallucinations of Large Language Models (LLMs). Flip-Flop languages are
synthetic benchmarks which model a single bit of memory and its operations: read
(r), ignore (i), and write (w). For example, the string "w 1 i 0 r 1" is valid
as the same bit was written and read, but "w 1 i 0 r 0" is invalid. The task of
the Transformer is to attend to the last write operation and generate the
correct bit for the read operation.
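The validity rule just described can be made concrete with a small sketch (illustrative code, not the authors' implementation):

```python
def is_valid_ffl(tokens):
    """Check that every read returns the most recently written bit.

    tokens: flat sequence like ["w", "1", "i", "0", "r", "1"],
    alternating instruction (w/i/r) and bit (0/1).
    """
    memory = None
    for op, bit in zip(tokens[::2], tokens[1::2]):
        if op == "w":
            memory = bit          # write: store the bit
        elif op == "r" and bit != memory:
            return False          # read must echo the last write
        # "i" (ignore) leaves the memory untouched
    return True

print(is_valid_ffl("w 1 i 0 r 1".split()))  # True  (the valid example above)
print(is_valid_ffl("w 1 i 0 r 0".split()))  # False (the invalid example)
```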
The authors discover that the Transformer exhibits a long tail of reasoning
errors even on this simple task (they call these attention glitches), and these
errors are very hard to mitigate. Interestingly, the LSTM can perform the task
perfectly, with 20x less training.
They also introduce a preliminary study on the effect of attention sharpening on
training Transformers to simulate the flip-flop automaton. Intuitively
sharpening should help as the Transformer only needs to attend to the most
recent write. They find that sharpening can indeed help but also introduce new
errors.
Strengths: The paper provides very interesting and significant insights into the workings
of the Transformer architecture.
Some of the most interesting findings:
- Transformers make reasoning errors on the long tail
- 1-layer LSTMs can learn the same task perfectly
- Transformers exhibit high variance in the OOD test error between random seeds
and even iterations within the same training run
- 10B-scale LLMs cannot solve the task robustly
- architectural and regularization changes can have orders-of-magnitude effects
on OOD performance
- in contrast to LSTMs, it seems that completely eliminating attention glitches
is not possible for Transformers
Flip-flop languages are also interesting as they are realizable by
self-attention (as the authors explain in lines 221-224), so they could
represent a reasoning task which should actually be easier for LLMs.
The paper is also very well written, I could not find any errors or opaque
sections.
Weaknesses: I think the main weakness of the study is that it's limited to flip-flop
languages. That being said, I think this is just because of the foundational
nature of the study: examining other languages would be outside the scope of the
paper and could be explored in future work.
Still, although the findings are very interesting, some of them may be specific
to flip-flop languages. For example, (R4), training on rare sequences, would be
very hard to do in the general case of natural language, and so I'm not sure
what (R4) means in the general case. Also, (R5) says that scaling improvements
are orders of magnitude smaller; I wonder whether that holds for natural language
too, as it is much more diverse.
The authors include some natural language experiments isomorphic to a flip-flop
language and a discussion about natural language in the Appendix.
Even though Section 5.3 is a preliminary study, it would be good to include some
quantitative results.
I believe that Fig. 5 (d) is incorrect: the error marked in red should be
$\sigma_0$ and $\sigma_1$.
The notation $\Delta$ could be introduced; I don't think it's widely
used in probability.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In the flip-flop language the value can be only 0 or 1. It could be interesting
to extend the task to more values (like 0-15, or a single byte, 0-255). How
worse do you think the Transformer and the natural LMs would perform in that
case? Do you think that the LSTM would still perform very well?
The attention sharpening regularization uses $-\Vert\alpha\Vert_{2}$, but usually
$L_1$ norm is used for sparsification. I found the reasoning behind
$L_\infty$ and entropy in the cited paper (Zhang et al.), but I still don't
see how using $L_2$ norm sparsifies. Could you elaborate on this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations are addressed thoroughly in the Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the encouraging feedback and for the helpful suggestions! We are glad that you find our results interesting and significant, and would like to discuss the remaining concerns below.
**[W1] Generalizing insights to natural languages**
This is a great question; we provided some discussions in [G1] of the global response. We’d also like to comment on “incorporating rare sequences'', which is as of now the only effective solution. While it is unclear how to formally define “rare sequences” in natural languages, we intuitively think of it as “improving data diversity”, which has been proven effective by several related works in various setups. The closest to our setting is Jelassi et al. 2023, which showed that adding as few as 10 OOD samples can greatly improve the generalization performance on arithmetic tasks. More generally, data diversity is shown to be the main contributor of CLIP’s robustness (Fang et al. 2022), and a properly chosen data mixture can improve the pretraining results (Nguyen et al. 2022, Xie et al. 2023).
Regarding scaling improvement, we have some partial evidence that while scaling helps improve the performance, it cannot fully eliminate the errors. Please see (R3) in our paper, as well as discussions in Appendix B.2.
**[W2] Quantitative results on mechanistic interpretability**
In Appendix B.5, we provided quantitative results on the regularization’s effect on weight norms (Figure 11) and difference in attention patterns (Figure 14, Figure 16(a)). We are happy to incorporate further suggestions to the results!
**[W3] Potential typo in Fig 5(d)**
Thank you for the close read! The figure is actually correct, that is, the error is indeed on the read token marked as the red $\bot$. Note that a seemingly incorrect attention pattern does not necessarily mean a wrong prediction; please see our footnote 8 and the discussion in “Appendix B.5 – Sparsity regularization helps sharpen the attention – Are attention patterns reliable for interpretability?” In this case, even though the attention for the last $\sigma_0$ and $\sigma_1$ is on the wrong token, their predictions are still correct. We apologize for the confusion and will make this clear in the revision.
**[Q1] - FFLM with multiple symbols**
We experimented with more symbols and report the results in Figure 2 of the global response PDF. Note that the 1-layer LSTM solves all of the task variants perfectly; please see [G4] in our global response for more discussions.
**[Q2] Attention sharpening with L2**
Note that we are regularizing on vectors after the softmax, where each attention vector has an L1 norm equal to 1 (i.e. on the simplex, which is denoted by $\Delta$). The largest L2 norm on the simplex is achieved when the vector is one-hot, hence encouraging a large L2 norm is equivalent to sharpening the attention. We will clarify this in the revised paper, and also define $\Delta$ clearly.
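To illustrate this numerically (a minimal sketch; the example vectors are made up):

```python
import math

def l2(v):
    return math.sqrt(sum(x * x for x in v))

# All three vectors lie on the probability simplex (entries sum to 1),
# as attention weights do after the softmax.
uniform = [0.25, 0.25, 0.25, 0.25]
peaked  = [0.85, 0.05, 0.05, 0.05]
one_hot = [1.0, 0.0, 0.0, 0.0]

# The L2 norm grows as the distribution sharpens and is maximized (= 1)
# exactly at a one-hot vector, so penalizing -||alpha||_2 encourages
# sharp (one-hot-like) attention.
print(l2(uniform), l2(peaked), l2(one_hot))
```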
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your insightful comments and for pointing out the quantitative
results in the Appendix. I also appreciate Figure 2 in the response. I believe
that the paper will be even better with the proposed revisions.
I still strongly believe that this negative result is very significant, the
paper should be accepted, and it could be the basis for much future work. To
emphasize this I increased my score to 9.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our response addressed your concerns, and thank you very much again for your valuable suggestions and support! | Summary: The paper identifies a simple task for which the Transformer architecture fails. The authors introduce flip-flop language modeling, towards quantifying the extrapolation capabilities of different architectures. Transformer models are determined to suffer from long tail errors, in a phenomenon termed "attention glitches".
Throughout the paper, this new synthetic task is analysed and the failure modes of Transformer models are identified. The authors then compare different techniques to mitigate the failures. Some theoretical justification and some mechanistic interpretability analysis are also provided.
Strengths: 1. The paper discusses a very simple task for which recurrent networks triumph but Transformers present failure cases, attributing these failures to the inductive bias of the model. The task proposed is very simple and adequately motivated.
2. Long-range dependencies are a common failure case of Transformer models. FFLM allows for easy-to-check failures on reasoning tasks.
3. Extensive experiments with different regularizations are provided.
Weaknesses: 1. I feel some of the implications and connections to Ji et al., 2023 are not highlighted. From Figure 3, depth does not seem to have a direct consequence on performance.
2. Although some theoretical justification is provided, the assumptions are in some cases too strong, e.g., the linear positional encoding.
3. Although a series of regularization and other techniques are proposed to solve the errors, no clear solution is proposed that works.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. It would be helpful to report errors as a function of difficulty. This can be done by presenting the errors as a function of sequence length (as done in Figure 3-right) or by also presenting errors as a function of different FFL tasks. If we define $d$ as the distance between the last write and the current read operation, is there a connection between the distribution of $d$ in the training set and the types of errors in the extrapolation set? How many examples of distance $d$ need to be presented for the model to generalize? For (possible) outliers $d$ in the training set, is the model fitting them correctly? There might be some interesting connections with Grokking [1, 2], where the model switches from a state of memorizing to one of generalizing.
2. Proposition 2 states that a 2-layer 1-head Transformer can represent FFL. Is there in practice any benefit from depth?
3. Other ways to decrease the entropy of the attention include changing the temperature inside the softmax (which is not included in Proposition 3), or the method of [3].
Fixes:
The x-axis of Figure 3 (left) is cropped (50 -> 500); the same holds for the middle-right panel (10 -> 10K).
[1] Power, Alethea, et al. "Grokking: Generalization beyond overfitting on small algorithmic datasets." arXiv preprint arXiv:2201.02177 (2022).
[2] Thilak, Vimal, et al. "The slingshot mechanism: An empirical study of adaptive optimizers and the grokking phenomenon." arXiv preprint arXiv:2206.04817 (2022).
[3] Martins, André, et al. "Sparse and continuous attention mechanisms." Advances in Neural Information Processing Systems 33 (2020): 20989-21001.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper proposes a simple example to study failure in extrapolation of Transformer models. The authors identify the lack of correct inductive bias in the model as the major shortcoming. Although many different regularization techniques are proposed, no clear answer is given for this phenomenon. I believe that this will nonetheless steer the community in the right direction toward answering fundamental questions about these models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the careful read and for the insightful comments! We are glad that you find the FFLM task to be well-motivated and helpful for steering the direction of the community towards fundamental concerns about the models. We hope to address your concerns below.
**[W1] Connection to hallucination and insights from Ji et al. 2023**
This is a great question. Please refer to [G1] in our global response.
**[W2] Some assumptions for theory are strong**
As our experiments highlight, the OOD errors have a long-tail behavior, and the OOD performance itself varies drastically across different random seeds as well as across different iterates within the same run. This makes it highly non-trivial to develop a complete end-to-end theory for attention glitches. Our work takes the first step by characterizing the impact of the self-attention head on OOD failures. We understand that the assumptions of linear position encoding and of focusing on a single attention head are strong, but we believe the insights here are already useful in highlighting the challenge of FFLM and why simple fixes may not be enough to solve the OOD problem.
**[W3] No clear mitigation strategy is proposed.**
Please see [G3] in our global response.
**[Q1] Error as a function of “difficulty”; relation to grokking**
We have added an additional plot (please see Additional Figure 3 in the rebuttal PDF) highlighting the fraction of errors by dependency length. Note that the errors occur at both short and long dependency ranges. Plotting fine-grained OOD performance using richer metrics of the training set, beyond the ignore-token probability $p$ used to generate it, is a great suggestion. Note that $p$ itself does characterize the distribution of distances between reads and writes in the training set (as long as we have enough samples), which is why we used this simple metric. We would be happy to add additional experiments looking more closely at the relationship between errors and the training distribution. We agree that understanding how much OOD data of each type is necessary to generalize well is a very interesting question for future work.
Moreover, our results are not directly related to grokking, since the training is done in an online fashion (i.e. each batch of samples is freshly sampled) and hence there is no overfitting.
**[Q2] Practical benefit / Role of depth of Transformer**
Proposition 2 emphasizes that depth-2 is sufficient to represent the FFLM language. We believe depth-2 is necessary as well. In our experiments, we not only train Transformers with depth-$2$ but also depth-$\{4, 6, 8\}$. (We also provide a few runs at larger depths, up to $16$; see Figure 1 in the rebuttal attachment.) However, we do not see a significant correlation between depth and OOD performance in the $>10000$ training runs with various hyperparameters. Since depth adds more representation power, it might seem that it offers an overparameterization benefit. However, it also introduces entanglements with optimization dynamics, where the benefits/drawbacks of depth are not so clear. There has been some recent work [1] that highlights that bigger models might end up being less robust than smaller models. We believe that understanding the role of overparameterization would be an interesting direction for future work.
**[Q3] Attention sharpening by changing the temperature**
Following [2], we used a $\log(\tau t)$ temperature, where $\tau$ is the temperature parameter, and $t$ is the position (i.e. we use a different temperature for each position). However, this method was not able to help mitigate OOD errors, as shown in Figure 2 of the global response PDF.
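As a concrete sketch of our reading of this scheme (function names, the demo values, and the handling of the position index are our own illustrative assumptions, not code from the paper or from [2]): scores at position $t$ are multiplied by $\log(\tau t)$ before the softmax, so later positions get sharper attention.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def position_scaled_attention(scores, tau=5.0):
    """Sharpen attention with a position-dependent inverse temperature:
    scores at (1-indexed) position t are multiplied by log(tau * t)
    before the softmax, so later positions attend more sharply.
    Assumes tau > 1 so the multiplier is positive at t = 1."""
    return [softmax(np.log(tau * t) * row)
            for t, row in enumerate(scores, start=1)]

# The same raw scores yield a sharper distribution at a later position.
raw = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
early, late = position_scaled_attention(raw, tau=5.0)
assert late.max() > early.max()
```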
**References**
[1] Miceli-Barone, A. V., Barez, F., Konstas, I., and Cohen, S. B. (2023). The larger they are, the harder they fail: Language models do not recognize identifier swaps in python. arXiv preprint arXiv: 2305.15507.
[2] David Chiang, Peter Cholak. Overcoming a Theoretical Limitation of Self-Attention.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional results, I believe they are very valuable.
The authors have addressed all my concerns. If I understand "Additional Figure 3" correctly, the authors present the location of errors as a function of distance to the last read. As smaller distances are a lot more likely, perhaps it would make more sense to present the probability of an error prediction as a function of that distance, i.e. how many errors were made for distance $d$, compared to how many instances of distance $d$ can be found.
Overall, I find this work and the FFLM fascinating and valuable to future research, as a simple yet challenging toy experiment. I believe authors have addressed concerns from all reviewers. To reflect this I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for the follow-up and the encouraging feedback!
**Clarification regarding the "errors vs. distance" plot:** Additional Figure 3 *does* normalize by the count (hence the label "fraction of errors"). You are correct that the distribution of denominators is extremely lopsided, due to the geometric distribution of dependency lengths; hence, the former plot (plotting counts instead of fractions) would be uninformative. (At the upper range, there may be undetected errors, due to rarity of such dependencies even under the FFL(0.98) distributions.) We'll clarify this upon merging into the manuscript. | Summary: This work studies the phenomenon of "attention glitches" in LLMs. Attention glitches are instances where an LLM's attention mechanism fails to capture long-range dependencies, resulting in factual inaccuracies or erroneous reasoning. The authors introduce a new synthetic benchmark called flip-flop language modeling (FFLM) to probe the extrapolative behavior of LLMs. FFLM is a simple generative task that requires an LLM to copy binary symbols over long-range dependencies, ignoring the tokens in between. The authors find that Transformer Transformer-based FFLMs suffer from a long tail of sporadic reasoning errors, even when the task is relatively simple.
Strengths: - The paper studies a critical problem on attention glitches for transformer-based models. It can potentially have a huge impact on the community.
- The synthetic FFLM benchmark can be useful to study long-range dependency in a controllable way.
- The findings/research questions provide more insights on the issue.
Weaknesses: - The paper discussed and aimed at drawing a connection between hallucination and long-range dependency, however, there is no further discussion on how these two terms interact with each other.
- While it is understandable that the paper studied the behavior of attention glitches, there is no proposed path to fix the issue which limits the contribution of this work.
- The setting is fully synthetic. It is unclear how any findings can be transferred into realistic and more complex tasks.
- Some experimental settings that are critical to the conclusion are unclear, e.g., dataset construction and sequence lengths, tokenizer choice, etc.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - The paper showed that a 1-layer LSTM works perfectly for FF while Transformers do not. However, it is unclear to me whether that is due to an effect of overfitting (to the distribution of the training data). Could the authors elaborate more on this?
- "We hypothesize that attention glitches occur in the internal algorithmic representations of Transformer models of natural language, and that they account for (a non-negligible portion of) the reasoning errors encountered in practice. " Is there a way to validate the hypothesis?
- It would be great if the authors can release the code to reproduce the results.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: - It should be made clear that the full experiments and analysis are based on synthetic setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your careful review! We are glad that you find the attention glitches phenomenon to be a critical problem that is worth studying, and hope to address your concerns below.
For the first three concerns in Weakness, please refer to the global response:
- **[W1] Hallucination vs long-term dependency?** Please see [G1] in our global response.
- **[W2] Is there a clear fix?** Please refer to [G3] in our global response.
- **[W3] how do insights transfer to more complex / real-world setups?** Please refer to [G2] in our global response.
Below we would like to discuss the remaining questions.
**[W4] Details about experimental setups**
Attention glitches are found even as we vary the dataset, sequence length, and tokenization. Figure 3 (right) demonstrates that attention glitches become more frequent as sequence length increases, and that this phenomenon appears across multiple commonly studied LLMs. Many of these models have different tokenization strategies and training sets. We have provided experimental details in Appendix B; please let us know if there are particular setups that you are concerned about, and we are happy to clarify further.
**(Q1) Is Transformer failing because of overfitting?**
We interpret “overfitting to training distribution” as “overfitting the empirical distribution of training samples” (please let us know if we misinterpreted your question), which we do not think is the reason. Note that we train both LSTM and Transformers on online data where each batch is freshly sampled (as described in the detailed setups, i.e. Appendix B.2 – Training and evaluation data), which can be considered as training for 1 epoch and hence unlikely to overfit.
**(Q2) Connecting attention glitches to hallucination in natural languages?**
Please refer to [G2] in our global response.
**(Q3) Code release**
We plan to release the code upon publication, thank you for the suggestion!
**(L1) Clarify all experiments are synthetic**
We believe this is rather clear already, since we have explicitly stated this in multiple places:
- FFLM is stated as “a parametric family of synthetic benchmarks…” in the abstract (line 9), in the list of contributions at the end of the introduction (line 47), and in the conclusion section (line 292).
- Section 3.2 also clearly states that FFLM is synthetic and is dedicated toward justifying the importance of the benchmark.
- The beginning of Section 4 (line 162) states that the experiments are synthetic.
Please let us know if you have suggestions on better highlighting the setup, thank you very much!
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response and for addressing the reviewers' points.
We have pinged reviewer vbMc, and they will make sure to go over the rebuttal soon.
Your AC
---
Rebuttal Comment 1.2:
Comment: I'd like to thank the authors for providing a very detailed explanation, and I apologize for the late response. The responses, particularly the general one, helped clarify the paper a lot, and consequently I have raised my score to 6. Please do incorporate these points in the final paper.
---
Reply to Comment 1.2.1:
Comment: Thank you very much again for the time and effort you've dedicated to reviewing our paper! We really appreciate your suggestions and will make sure that they are reflected in the camera ready version. | Summary: This paper introduces Flip-Flop Language Modeling (FFLM), a synthetic benchmark designed to evaluate language model's (LMs) ability to perform operations on a single-bit of memory. LMs are evaluated on their ability to generalize to out-of-distribution (OOD) sequences. The training setups are varied along several axes including hyperparameters (random seed, training steps, weight decay, dropout), architectural changes (number of layers, hard vs. soft attention, positional encodings), and dataset properties (additional OOD data). In all cases, the LMs produce attention glitches, i.e., they exhibit errors in their reasoning. In contrast, an LSTM generalizes perfectly. The authors provide preliminary hypotheses explaining the failure modes.
Strengths: *Originality:* although synthetic benchmarks have often been studied in NLP, FFLM focuses on the smallest reasoning capability (memory of a single bit), reducing confounding factors driving mistakes.
*Quality:* the experiments are comprehensive and cover most of the "standard" training and architectural tricks. Moreover, the FFLM benchmark is well-designed and is appropriately justified as a benchmark. These two factors lead to a useful analysis of current limitations of LMs.
*Clarity:* the paper is well-written and figures + captions are self-explanatory
*Significance:* understanding reasoning errors in LMs is a crucial step towards improving their reliability and thus future deployment in real-world systems. Moreover, this work provides a minimal benchmark with which to evaluate any proposed architectural changes to LMs.
Weaknesses: The main weakness with the work is a lack of actionable insights. Although several mechanistic hypotheses are proposed as to explaining the behavior behind the benchmark, they are not adequately explored nor clearly organized. This leads to the following concerns:
**Evaluation concerns:**
* *Lack of experiments studying whether the proposed hypotheses in Section 4.1 actually drive the errors examined.* I appreciate the compilation of possible hypotheses explaining the LM errors. However, there don't appear to be experiments attempting to falsify these hypotheses? For example, one could examine whether the LMs do indeed only attend to the previous $n$-tokens.
**Clarity concerns:**
* *Does not emphasize the benchmark properties of FFLM.* Although the authors defined success on FFLM as achieving 100% performance accuracy, if people are to evaluate their architectural innovations on FFLM, it would be helpful to have a more continuous notion of progress (rather than solved vs. unsolved). Possible suggestions include:
* Defining a notion of difficulty of generalization. E.g., FFL(0.1) is harder than FFL(0.2) given training data from FFL(0.98). Perhaps one could introduce an easy and a hard test set.
* Classifying severity of failures. Is there perhaps some metric that would estimate how close the LM attention patterns match either the ideal solution or any other possible solution? Intuitively, there should be some distinction between the attention weights of a randomly-initialized network and the attention weights of a trained model with error (Figure 5d). Perhaps one metric could be the proportion of attention weight placed on all the previously encountered write tokens vs. on all other tokens
* *Should have a clearer list of future directions.* I think it would help guide future work if there was a compiled list of actionable experiments and / or hypotheses to test that were provided in the appendix.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions:
1. Were there any other hypotheses explored in Section 4.1? It would be nice to have a more careful enumeration (perhaps by category) of the possible hypotheses driving the lack of OOD generalization. Moreover, one of the proposed mechanisms is already present in previous work.
Typos:
1. Line 867: "the regularization indeed have" --> "the regularization indeed has"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Although there appears to be a limitations section in Appendix A.4, I think it would be appropriate to more clearly note in the main body that this work does not posit a full list of possible mechanisms driving poor performance on FFLM nor will solving performance on FFLM constitute a transformer with robust reasoning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful comments! We are glad that you found findings from our minimally sufficient FFLM task to be original and significant. There were some great points raised in the review, which we’d like to address below.
**Providing a list of actionable future directions**:
Thank you for the great suggestion! Here are our current thoughts:
- *Learning from synthetic setups*: Our results show that simple synthetic setups can serve as clean, controllable sandboxes for isolating and analyzing various mechanisms in the architecture. Designing benchmarks or probing tasks like FFLM could help better understand Transformer’s capabilities and limitations.
- *FFLM extensions and connecting to natural languages*: An obvious limitation of a synthetic setup is its gap to the real-world settings. One direction is to study whether and how findings can transfer to real-world setups; as an example, please see our response [A1] to Review 4Fu7 regarding the effect of data. Another direction is to expand the synthetic setup to narrow the gap to natural languages, for which we provided several directions in Section A.4, such as incorporating more complex selection criteria and expanded notion of ground truth.
- *More error analyses and (mechanistic) understanding*: While our paper provides explanations of two failure modes and some preliminary mechanistic study, a more comprehensive understanding of Transformer’s internal mechanism can be helpful. In contrast to single-component analyses in our work, such understanding under natural languages setups can be much more challenging and nuanced; for example, please see our discussion “Submodules” in Section A.4.
- *Better metrics*: as the reviewer pointed out, a continuous notion of progress can be much more informative than a binary answer or accuracy. One direction is to take inspiration from works on hidden progress throughout training, such as Barak et al. 22, Nanda et al. 23.
- *Architectural innovations*: Attention glitches are essentially a problem with the architecture: the Transformer fails at FFLM, while the LSTM succeeds easily. Therefore, architectural innovations are likely a necessary part of the fix. As discussed in Section 6, one idea is to integrate the recurrent inductive bias into the Transformer architecture, for which several prior works have shown promising results [Katharopoulos et al. 2020, Dao et al. 2022, Peng et al. 2023].
**Better understanding the task and the failures**
As the reviewer correctly noted, the success metric of our proposed task is to process FFLM perfectly, which can be practically thought of as performing as well as recurrent networks (LSTMs). A continuous progress measure could be the accuracy, and it is a great question to find other metrics that could serve as more informative progress measures; please see the list of future directions above. Comparing attention weights is an interesting idea that is worth trying, though our current guess would be that it might not be directly helpful, since various prior work has shown that attention patterns can be misleading as discussed in Footnote 7 and Appendix B.5.
To better highlight the effects of various mitigation methods on the errors, we added Figure 1 in the global response PDF, which shows that each type of methods may be helpful for improving the OOD error on either the denser sequences or the sparser sequences, but no method was able to improve both simultaneously.
**Hypothesis driving lack of OOD performance**
We explored several potential hypotheses that lead to poor OOD performance (not explicitly organized in Section 4.1). Let us reiterate them here organized by the different aspects of the training pipeline:
- (Architecture) _H1: Small Transformers are not capable of representing the FFLM language perfectly_. We refuted this by showing a theoretical construction of a 2-layer (low parameter count & low-norm) Transformer that realizes FFLM with 100% accuracy.
- (Architecture) _H2: Soft attention can lead to dilution_. We supported this hypothesis by showing that attention sharpening indeed improves generalization across sparse sequences.
- (Architecture) _H3: Transformers can only focus on the recent $n$ tokens_: We show that this explanation is insufficient since the model makes errors even on dense sequences.
- (Data) _H4: OOD generalization requires coverage of tails_. We support this by showing that adding sparse and dense examples improves OOD performance.
- (Optimization) _H5: Bad local minima exist_. We support this in our interpretability studies by showing that even after enforcing hard attention, the optimization can find solutions that confidently choose the wrong argmax on certain sequences.
Note that none of the hypotheses fully explains all the failure modes we observe in our experimentation, which helps highlight the challenge of explaining OOD failures using simple mechanisms. Towards developing an end-to-end explanation, we would need to incorporate the interplay between different attention heads in each layer and across different layers, and the impact of optimization on the in-distribution loss on the OOD loss, which is highly non-trivial.
Thank you also for the suggestions on clarifying the limitations of setup and for correcting the typo; we will update the camera ready version accordingly.
---
Rebuttal Comment 1.1:
Title: Thank you for your details response.
Comment: I appreciate the proposed list of directions and hope they will be included in the paper. I would like to see this paper accepted but I am keeping my score.
---
Reply to Comment 1.1.1:
Comment: We will make sure to include these discussions in the camera-ready version. Thank you very much again for your support and for the valuable suggestions! | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful comments. In this global response, we address questions posed by multiple reviewers, and outline additional experiments we ran during the author response period.
**[G1] Gap between FFLM and natural languages:** We certainly agree that there are numerous discrepancies between FFLM vs. natural language modeling. We introduce FFLM not to act as a representative distribution for the many diverse capabilities needed to process natural language; rather, we use it to isolate *one* important capability where Transformers exhibit subtle errors. We reiterate the perspectives discussed in our paper:
* Instances of FFLM are embedded in distributions of natural language and code, as seen in Figure 2c. We see in practice that natural LLMs exhibit similar sporadic errors when prompted to complete such sequences (Figures 1 and 3 (top right)). Thus, robust FFLM processing is a necessary (but far from sufficient) condition for robust language processing.
* Shallow, parallel compositions of flip-flops can process a large class of formal languages known to be relevant in syntactic parsing and algorithmic reasoning [Liu et al. ‘23]. Thus, FFLM is an “atomic” unit of more complex sequence processing capabilities, and attention glitches may provide a way to tackle the notoriously hard problem of diagnosing the internal representations of LLMs. We are excited to tackle this in future work.
**[G2] Gap between attention glitches and hallucinations in practice:**
* We are only claiming that Transformers’ errors on flip-flop strings to be similar to a specific type of hallucination: namely, closed-domain hallucinations, where the model’s generations contradict unambiguously presented factual information provided in the context. Our intent is to provide a _minimal_ example which fulfills this criterion and reveals a shortcoming in the inductive bias of Transformers.
* We _do not_ claim to address (or even define) LLM hallucinations in their full scope; the full question of defining LLM factuality is ambiguous and philosophical in nature.
* As noted by reviewer ycg1, work by [Ji et al., 2023] suggests that LLM hallucinations broadly tend to improve with increasing depth, but here we find that deeper language models are not necessarily more resilient to attention glitches. Attention glitches are a particular kind of hallucination that Transformer architectures are susceptible to, and the architectural particulars of a Transformer-based model (like depth) do not remedy them. Our work, which considers various architectures, tokenizers, and regularizers, suggests that more invasive interventions are necessary to solve attention glitches.
**[G3] Regarding algorithmic fixes:** Reviewers ZZvs and ycg1 shared concerns about lack of algorithmic fixes. Rather than a weakness, we view this to be our work’s **central negative result**: we searched comprehensively for a clear fix, and did not find one (apart from changing the distribution or eschewing the Transformer).
While Section 5 documents a wide selection of algorithmic interventions that can quell the attention glitch pathology by orders of magnitude, it is true that we do not provide a solution on par with data diversity or the recurrent network. The goal of this paper is to highlight one kind of hallucination, and to emphasize that this is particular to the Transformer, regardless of its scale or specific architecture; recurrent predecessors have no such shortcoming. FFLM offers a precise probing mechanism for benchmarking progress on hallucinations of this kind, and further, helps to identify one concrete, disambiguated cause of the general LLM hallucination problem. We believe that advancing language models requires both thorough experimentation that identifies specific, classifiable kinds of hallucinations and honed tools like FFLM to study them closely — our work makes these contributions.
**[G4] Additional experiments and plots:** During this response period, we launched a few more sets of training runs, to address some of the reviewers’ curiosities. Figures are provided in the attachment.
* Dependence on scale: Figure 1 shows violin plots for o.o.d. errors across various model sizes, showing no clear trend (and no significant gain from increasing or decreasing model size). This corresponds to the hyperparameter grid sweep outlined in Figure 6 in the appendix of the original manuscript.
* Temperature: Prior work has suggested that sparsity of attention heads can be achieved by scaling the attention scores before the softmax by a constant (i.e. tuning the temperature). Our preliminary finding is that this does not improve extrapolation (Figure 2); note that direct attention sharpening *does* work (though not perfectly).
* Different number of states: Reviewers ZZvs and 4Fu7 both suggest variants of this experiment (which is also mentioned as a natural generalization axis in the paper); see our response to ZZvs for an explanation of the variants. Our preliminary findings: the long-tail errors persist; the 1-layer LSTM solves all of the task variants perfectly.
* Stratified errors by dependency length: in Figure 3, as requested, we exhibit a breakdown of error rates as a function of dependency length (distance from the current read token to the previous non-ignore token), on 3000 sequences from FFL(0.1) and 30K sequences from FFL(0.98). This shows, in a finer-grained manner, that the errors are diverse in nature (not concentrated on any particular $n$-gram length), corroborating that this phenomenon evades oversimplified characterizations.
Pdf: /pdf/9573e7bf0eef0d8492dd27d747420bfac2af0f48.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a paradigm for diagnosing one cause of closed-domain (intrinsic) hallucinations in large language models (LLMs) and presents its analysis results.
To understand why LLMs can be factually inaccurate or prone to erroneous reasoning, the authors propose a flip-flop language modeling (FFLM) task designed to probe behavior under extrapolation. By controlling the probability of the ignore instruction, the FFLM can generate sequences with both dense-and-short and sparse-and-long dependencies.
This allows us to observe changes in the Transformer's learning outcomes and behavior between these two kinds of sequences.
The analysis revealed that occasional, long-tail inference failures cause errors, and currently, their occurrence cannot be predicted. Some of the effects can be mitigated with regularization, but they cannot be completely eliminated.
The main claims are as follows:
- Proposing the flip-flop language modeling task, which is designed to easily probe failure modes in long-sequence inference.
- Transformers can learn flip-flop languages, but they occasionally make long-tail errors regardless of dependency length.
- To mitigate this attention glitch, several regularization techniques are verified. No method can resolve the problem entirely, though each technique has a certain effect.
### after rebuttal
I increased my score from 5 to 6 based on the explanations provided in the feedback.
Strengths: - Proposal of FFLM, a controllable long-range dependency resolution task, under a simple task setting. The adjustability of the dependency range by the ignore action's probability is easy to understand.
- According to the authors, this is the first attempt to attribute model hallucinations to a systematic architectural flaw in the Transformers.
- Presenting various experimental results and possible explanations from many angles. Unfortunately, none of the results in this report is a definitive cause or solution (most are ruled out by exceptions), but they can serve as initial investigation items in the analysis of hallucination in natural-language LMs.
- It's interesting that LSTMs and 1-layer single Transformers can solve some issues, but multi-layer LSTMs are not good at the issues. This suggests the necessity of architectural innovations beyond just stacking self-attention layers.
Weaknesses: - While the paper presents several interesting results to the readers, the current manuscript does not provide a clear guideline on what steps should be taken to counteract hallucinations in natural-language LLMs. The impression I get from the current manuscript is that 'our current knowledge does not pinpoint the cause of hallucinations in natural language LLMs'.
- I found it interesting that LSTMs and 1-layer single Transformers sometimes perform better than the multi-layer LSTM. Why does this happen? I believe there must be important implications, so if I have overlooked the explanation, please point it out in the review answers.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
This is just a simple idea: Consider a unit with more memory bits. Would it be possible to create a unit that defines the ignore probability for each bit independently?
Is it possible that a multi-head attention can learn FFLM (with more bits) efficiently, by aligning the number of bits and heads?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Well discussed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the thoughtful comments and great questions! We are glad that the reviewer finds it interesting that FFLM exposes the failure of multi-layer Transformers when smaller architectures can perform better, and hope to address the concerns below.
**Why LSTM / smaller Transformers perform better**
There might be some typos in the reviewer’s original question, and we would like to make sure our understanding is correct; we interpret the question as: “why can 1-layer LSTM and 2-layer Transformers perform better than multi-layer Transformers?”
Representationally, both 1-layer LSTM and 2-layer Transformers are minimally sufficient to solve the task. Comparing these two, we hypothesize that the LSTM has a recurrent inductive bias that is more suitable for the FFLM task, which is consistent with prior works aiming to solve some reasoning tasks via 'recurrent prompting' [1,2,3] or by introducing recurrence into the architecture directly [4,5,6].
Comparing 2-layer and multi-layer Transformers, the latter performs worse likely due to the redundancy in the architecture, which allows more solutions that are consistent with the training samples but can have more unexpected behaviors on unseen samples.
**This is a comprehensive negative result paper**
While Section 5 documents a wide selection of algorithmic interventions that reduce the attention-glitch error rates by orders of magnitude, it is true that we do not provide a solution on par with data diversity or the recurrent network. The goal of this paper is to highlight one specific kind of hallucination, and to emphasize that it is particular to the Transformer, regardless of its scale or specific architecture; recurrent predecessors have no such shortcoming.
FFLM offers a precise probing mechanism for benchmarking progress on hallucinations of this kind, and further, helps to identify one concrete, disambiguated cause of the general LLM hallucination problem. We believe that advancing language models requires both thorough experimentation that identifies specific, classifiable kinds of hallucinations and honed tools like FFLM to study them closely — our work makes these contributions.
**Generalizations beyond 1-bit memory.**
There are several ways to generalize the FFLM, as we discussed in Appendix A.4. Below are some options most related to generalizing the memory:
- Keeping the instruction set the same and using a larger set of values. For example, two isomorphic ways to write a sequence could be “$w\ 3\ i\ 2\ i\ 1\ r\ 3$” (“4 symbols”) and “$w\ 1\ 1\ i\ 1\ 0\ i\ 0\ 1\ r\ 1\ 1$” (“2*2 symbols”). Figure 2 in the global rebuttal attachment shows that errors persist (and, in fact, both long- and short-range error rates worsen) with increasing vocabulary size.
- Keeping the set of values the same and using a larger instruction set: for example, when there are 2 memory units (indexed as 0 and 1), a sequence could be “$w_0\ 0 \ w_1\ 1\ r_1\ 1\ r_0\ 0$”, where $w_i$ ($r_i$) refers to a write at (read from) memory bit $i$. This has more flexibility in the probability hyperparameters and can be solved using a construction similar to the one given in Proposition 2, extended to use more heads. This is analogous to interleaving multiple FFLM tasks and slightly tangential to our current investigation; we are happy to include more results in the camera-ready.
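To make the task family concrete, below is a minimal illustrative generator for such sequences. This is our own sketch under stated assumptions (the split of non-ignore probability mass between writes and reads, the token encoding, and all names are ours, not the paper's data pipeline); it supports a configurable ignore probability and a configurable value vocabulary in the spirit of the first generalization above.

```python
import random

def sample_fflm(length, p_ignore, n_values=2, seed=None):
    """Sample one flip-flop sequence of `length` (instruction, value) pairs.

    The sequence starts with a write; each subsequent step is an ignore with
    probability `p_ignore`, and otherwise a write or a read (equal chance,
    an assumption). A read must reproduce the most recently written value,
    which is the long-range dependency being probed.
    """
    rng = random.Random(seed)
    memory = rng.randrange(n_values)
    seq = ["w", memory]                              # first instruction is a write
    for _ in range(length - 1):
        u = rng.random()
        if u < p_ignore:
            seq += ["i", rng.randrange(n_values)]    # ignore: value is arbitrary
        elif u < p_ignore + (1 - p_ignore) / 2:
            memory = rng.randrange(n_values)
            seq += ["w", memory]                     # write: update memory
        else:
            seq += ["r", memory]                     # read: recall the last write
    return seq
```

With a small ignore probability (e.g. FFL(0.1)) the dependencies between writes and reads are dense and short; with a large one (e.g. FFL(0.98)) they become sparse and long, which is the regime where the long-tail read errors appear.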
**References**
[1] Maxwell Nye et al. Show Your Work: Scratchpads for Intermediate Computation with Language Models.
[2] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
[3] Bingbin Liu, Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, Cyril Zhang. Transformers Learn Shortcuts to Automata.
[4] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.
[5] Daniel Y. Fu, Tri Dao, Khaled K. Saab, Armin W. Thomas, Atri Rudra, Christopher Ré. Hungry Hungry Hippos: Towards Language Modeling with State Space Models.
[6] Bo Peng et al. RWKV: Reinventing RNNs for the Transformer Era.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Authors,
thank you for your feedback! (and excuse me for typos...)
Concerning "Why LSTM / smaller Transformers perform better": Yes, that is what I want to ask. The provided explanation sounds convincing to me. Thanks!
I find the answers to my questions and fellow reviewers' questions are overall convincing and satisfactory.
I'm leaning towards a more positive attitude, so I will increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response addressed your concerns. Once again, thank you very much for the insightful discussions and support! | null | null | null | null | null | null |
Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage | Accept (poster) | Summary: This paper proposes a unified framework for robust Markov decision processes (RMDPs) that includes many recently proposed models as special cases. Under this generic framework, the authors propose Doubly Pessimistic Model-based Policy Optimization (P^2MPO), which adopts a double pessimism principle for policy optimization. They show that the suboptimality gap of the proposed algorithm can be upper bounded by the model estimation error and the robust partial coverage coefficient. They further provide concrete implementations of their algorithm for some specific models and show that it enjoys an $n^{-1/2}$ convergence rate, with $n$ being the number of trajectories in the offline dataset.
Strengths: - They proposed a generic framework that covers many models such as $\mathcal{S}\times\mathcal{A}$-rectangular tabular RMDPs, $\mathcal{S}\times\mathcal{A}$-rectangular kernel RMDPs, $\mathcal{S}\times\mathcal{A}$-rectangular neural RMDPs, $\mathcal{S}\times\mathcal{A}$-rectangular factored RMDPs.
- Instead of covering the visitation distribution of any policy, they only require a partial coverage style assumption, i.e., the dataset covers the visitation distribution of the optimal policy in a robust fashion.
Weaknesses: - Computationally inefficient, i.e., not sure how to solve (4.1), (4.2) and (3.2) efficiently.
- Though the authors showed how the model estimation step is implemented on some specific models, it is not clear how to implement the model estimation step that satisfies Condition 3.1 and Condition 3.2 in general.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why is the robust partial coverage assumption (which considers all possible transition kernels in a robust set) much weaker than assuming the dataset covers the visitation distribution of any possible policy (under only the true transition kernel)? In other words, does the full-coverage-style assumption imply the partial-coverage-style assumption?
- How do the rates obtained in Corollaries 4.1 and 4.3 of Section 4 compare with previous works?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer Ty5v**
Thanks very much for your appreciation of our work! In the following, we will try our best to address all your concerns and questions.
**Q1: The algorithm is computationally inefficient, i.e., not sure how to solve (4.1), (4.2) and (3.2) efficiently.**
**A1:** We clarify that our work focuses on the statistical side of robust offline RL under general function approximations, which remains an important open problem. Our algorithm is therefore information-theoretic and is indeed intractable with an abstract function approximation class. The objective we designed is to achieve statistical efficiency in the most general setup. Computational efficiency is not our focus.
Meanwhile, if we specify the model space, for example, tabular case or linear/kernel function classes, we can design approximations of our doubly pessimistic algorithm. For example in tabular RMDPs, we can replace the iterative infimum in our double pessimism objective (3.2) by an LCB-style bonus. This has been shown to be efficient both theoretically and experimentally by [1], which can demonstrate the efficiency and practicality of our general algorithm design.
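For readers curious what such a specialization might look like, here is a minimal, hypothetical sketch (our own illustration, not the paper's P^2MPO implementation) of a single tabular robust Bellman backup under an $\mathcal{S}\times\mathcal{A}$-rectangular total-variation robust set; all names and the exact radius convention are our assumptions. The inner infimum over transition kernels has the standard closed-form greedy solution: move up to `rho` of probability mass from the highest-value next states onto the lowest-value next state.

```python
def tv_robust_backup(P, r, V, rho):
    """One tabular robust Bellman backup:
        Q(s, a) = r(s, a) + inf over P~ in TV-ball(P(.|s,a), rho) of <P~, V>.

    P[s][a] is the nominal next-state distribution (list of probabilities),
    r[s][a] the reward, V the next-step value vector, rho the TV radius.
    """
    n = len(V)
    s_min = min(range(n), key=lambda s2: V[s2])          # worst next state
    order = sorted(range(n), key=lambda s2: -V[s2])      # highest value first
    Q = []
    for s in range(len(P)):
        Q.append([])
        for a in range(len(P[s])):
            p = list(P[s][a])
            budget = rho
            for s2 in order:
                if s2 == s_min or budget <= 0:
                    continue
                move = min(p[s2], budget)
                p[s2] -= move          # take mass from a high-value state
                p[s_min] += move       # dump it on the worst next state
                budget -= move
            Q[s].append(r[s][a] + sum(p[s2] * V[s2] for s2 in range(n)))
    return Q
```

An LCB-style data bonus, as in the tabular approximation mentioned above, would amount to subtracting a count-based penalty from `r` before this backup; the greedy inner step relies on the standard fact that, within a TV ball, the worst-case kernel shifts mass toward the lowest-value next state.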
**Q2: Though the authors showed how the model estimation step is implemented on some specific models, it is not clear how to implement the model estimation step that satisfies Condition 3.1 and Condition 3.2 in general.**
**A2:** Conditions 3.1 and 3.2 in our unified theory are guidelines for customizing the model estimation subroutine for different RMDP examples, which shows the flexibility of our algorithm and theory. Different RMDP examples have different structures; by tailoring the model estimation step to a specific RMDP example, one can make the best use of its structure and obtain better sample efficiency.
For example, for $\mathcal{S}\times\mathcal{A}$-rectangular robust factored MDP (Example 2.9) with finite states and actions, our proposed model estimation subroutine (Section 4.2) enjoys better sample efficiency than naively applying the model estimation step designed for general $\mathcal{S}\times\mathcal{A}$-rectangular robust tabular MDPs (Example 2.5 in Section 4.1). This is also the case for $d$-rectangular robust linear MDPs (Appendix B).
In general, an RMDP instance could have a very complicated and detailed structure. Designing a once-for-all model estimation subroutine would lose structural information that may help boost sample efficiency. Therefore, implementing a model estimation step that satisfies Conditions 3.1 and 3.2 in full generality is not our focus.
**Q3: Why the robust partial coverage assumption (which considers all the possible transition kernels in a robust set) is much weaker than assuming the dataset covers the visitation distribution of any possible policy (under only the true transition kernel)? In other words, does the full-coverage-style assumption implies the partial-coverage-style assumption?**
**A3:** Thanks for pointing that out! We clarify this point in the following.
In our paper (Line 262 to 264) we mentioned two types of coverage condition: (i) the distribution of the data is uniformly lower bounded; (ii) covering the visitation distribution of any $\pi\in\Pi$. It seems that we did not elaborate under what transition kernels (ii) holds. By (ii), we meant to refer to the robust-full-coverage-style assumption which holds for all $\pi\in\Pi$ and $P\in\boldsymbol{\Phi}(P^{\star})$, parallel to our robust-partial-coverage-style assumption in terms of transition kernels. When considering the full-coverage-style assumption you mentioned, it is important to note that a direct implication between it and our robust-partial-coverage-style assumption cannot be established. However, as is shown by an information-theoretical lower bound by [1] for the tabular RMDP case, the robust-partial-coverage-style assumption is the "minimal" assumption on the offline data to some extent.
Meanwhile, we remark that all previous full-coverage-style offline setup papers use (i) as their data assumption. Since (i) requires that the distribution of the state-action pairs is uniformly lower bounded, this kind of full-coverage assumption is definitely much stronger than our robust partial coverage assumption.
**Q4: How does the rates obtained in Section 4 Corollary 4.1 & 4.3 compare with the previous works?**
**A4:** We have compared our results in Corollary 4.1 with the existing work [1] in Remark 4.2. Regarding our results for robust factored MDPs in Corollary 4.3, there is no existing work to compare with: this novel model is proposed by our work, and our algorithm stands as the first efficient algorithm designed for this problem.
**References:**
[1] Shi, Laixi, and Yuejie Chi. "Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity." arXiv preprint arXiv:2208.05767 (2022).
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I have read the rebuttal and all the other reviews. I think the paper has novel contributions (i.e., double pessimism, P^2MPO) but my concern is still the computational issue. Thus I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your appreciation of the contributions and novelties of our work! We will keep improving the paper following your questions and suggestions in the revision.
Strengths: I think this work really pushes the robust RL community's research efforts further by answering:
> can we design a generic algorithm for robust offline RL in the context of function approximation?
The main contribution of double pessimism: one for robustness of dynamics uncertainty and one for model-estimation using offline data is a really nice idea worthy for publication at NeurIPS.
Weaknesses: I have only minor weaknesses for this work as follows:
1. The robust Bellman equation for $d$-rectangular sets is not formally proven. The proof isn't there even in [Ma et al. 22], to the extent of my search. The current Appendix C just rewrites the [Iyengar 05] proof. Maybe replace it with the proof of the $d$-rectangular robust Bellman equation?
2. The robust partial concentrability dependence on the robust set $\Phi$: Is it tight? Is there a lower bound for this setting? The reason for this question is coming from the fact that the robust solution is looking for $\min$ over the robust set $\Phi$.
3. Honestly, Section B.2 can be expanded further. I found the following statement vague in its current form:
> our algorithm framework is unable to deal with this kind of rectangular robust sets in the context of partial coverage data due to some technical problems in applying the partial coverage coefficient (Assumption 3.3) under this kind of robust sets.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: please see weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Regarding S-rectangular sets, please see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer F7Eb**
Thanks so much for your appreciation of our work! We will keep improving our paper following your suggestions. In the following, we address all your concerns and questions.
**Q1: The robust Bellman equation for $d$-rectangular sets are not formally proven. The proof isn't there even in [1] to the extent of my search. The current Appendix C just rewrites [2] proof. Maybe replace it by the proof for d-rectangular robust Bellman equation?**
**A1:** Yes, we agree with your comment that a rigorous proof of the robust Bellman equation for $d$-rectangular linear MDPs proposed by [1] is currently still missing. Actually, we can prove it following a similar argument as in the $\mathcal{S}\times\mathcal{A}$-rectangular case. Thanks for your suggestion; we will consider adding its proof in the revision to make the paper self-contained.
**Q2: regarding the robust partial concentrability dependence on the robust set $\boldsymbol{\Phi}$, is it tight? Is there a lower bound for this setting? The reason for this question is coming from the fact that the robust solution is looking for min over the robust set $\boldsymbol{\Phi}$.**
**A2:** We think the robust concentrability dependence on the robust set is tight. For the robust single-policy clipped concentrability $C_{\mathrm{rob}}^\star$ defined in [3], we can show that $\sqrt{C_{P^\star, \boldsymbol{\Phi}}^\star} \le C_{\mathrm{rob}}^\star.$ Together with the lower bound $\Omega(C_{\mathrm{rob}}^\star/\varepsilon^2)$ in [3], we know our robust partial coverage coefficient $C_{P^\star, \boldsymbol{\Phi}}^\star$ characterizes the statistical limit of robust offline RL. Here we omit other parameters such as the horizon length $H$ in the lower bound and $\varepsilon$ is the accuracy of the desired policy.
**Q3: Honestly, Section B.2 can be expanded further. I found the following statement vague in its current form: "our algorithm framework is unable to deal with this kind of rectangular robust sets in the context of partial coverage data due to some technical problems in applying the partial coverage coefficient (Assumption 3.3) under this kind of robust sets."**
**A3:** Thanks for your suggestion! We will make the statement clearer in our revision. In the following we briefly explain the technical problems we met. Intuitively, for $\mathcal{S}$-rectangular RMDPs, it actually obeys another form of robust Bellman equation (RBE):
$$
V_{h,P,\boldsymbol{\Phi}}^{\pi}(s_h) = \mathbb{E}_{a_h\sim \pi_h(\cdot|s_h)}[R_h(s_h,a_h)] + \inf\_{\widetilde{P}_h\in\boldsymbol{\Phi} (P_h)} \mathbb{E}\_{a_h\sim \pi_h(\cdot|s_h), s'\sim \widetilde{P}_h(\cdot|s_h,a_h)}[V\_{h+1,P,\boldsymbol{\Phi}}^{\pi}(s')]
$$
(Both $\mathcal{S}\times\mathcal{A}$-rectangular and $\mathcal{S}$-rectangular RMDPs satisfy this form of RBE, but $\mathcal{S}$-rectangular RMDPs satisfy only this form.) However, this form of RBE does not give the same suboptimality decomposition as the one obtained by our proof techniques (Eqn. (D.8) in Appendix D), which is key to applying the robust partial coverage condition adopted by our paper (Assumption 3.3). Therefore, we are currently not sure whether it is possible to include $\mathcal{S}$-rectangular RMDPs in our theoretical framework. Figuring this out is an interesting direction for future work.
**References:**
[1] Ma, Xiaoteng, et al. "Distributionally robust offline reinforcement learning with linear function approximation." arXiv preprint arXiv:2209.06620 (2022).
[2] Iyengar, Garud N. "Robust dynamic programming." Mathematics of Operations Research 30.2 (2005): 257-280.
[3] Shi, Laixi, and Yuejie Chi. "Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity." arXiv preprint arXiv:2208.05767 (2022).
---
Rebuttal Comment 1.1:
Comment: The rebuttal addressed my concerns. I think my current rating considering the rebuttal and other reviewers’ concerns still stands correct, and I am positive for its publication.
---
Reply to Comment 1.1.1:
Comment: Thanks so much for your efforts reviewing our paper and your positive feedbacks! We will further improve our paper following your suggestions during revision. | Summary: This paper studies distributionally robust offline reinforcement learning. They propose a general learning principle, double pessimism, as well as a generic algorithm framework P$^2$MPO for robust offline RL, and show that it is provably efficient in the context of general function approximation.
Strengths: The paper is well-written and theoretically solid. It provides several novel contributions to the field of distributionally robust MDP.
First, it proposes a general learning principle, double pessimism, together with a generic algorithm framework P$^2$MPO and a unified theoretical analysis.
Second, it proposes several novel structures of uncertainty set and discusses their qualities under three commonly used rectangularity assumptions. Under these structures, they solve the open problem of learning robust offline RL in the context of general function approximation.
Weaknesses: Some minor typos.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. On line 100, is the policy a function from $\mathcal{S}$ to $\Delta(\mathcal{A})$?
2. How strong are assumption D.1 and E.1, are they reasonable?
3. Check inequality (E.23) '$0\leq \lambda_iH$'.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer otjP**
Thanks you so much for your appreciation of our work! We will keep improving our paper following your feedbacks. In the following, we address all your concerns and questions.
**Q1: Some minor typos: i) On line 100, is the policy a function from $\mathcal{S}$ to $\Delta(\mathcal{A})$? ii) Check inequality (E.23) $0\leq \lambda_i H$.**
**A1:** Thanks so much for pointing those out! We will correct them in the revision.
**Q2: How strong are assumption D.1 and E.1, are they reasonable?**
**A2:** We take this to refer to the assumptions on the lower bound of the dual variables (Assumptions E.3 and F.2). It's worth mentioning that the adoption of this assumption in problems with the KL robust set is consistent with previous works (e.g., [1]). From our perspective, removing this assumption in the context of RMDPs with function approximation, without introducing additional assumptions, presents a challenge due to the inherent nature of the KL-divergence. In our work, we also investigate robust RL with the TV robust set, which is a standard and significant setting. For robust RL with the TV robust set, we do NOT need regularity assumptions like Assumptions E.3 and F.2, and the final suboptimality gap is *polynomial in all parameters*.
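For context on why such a bound arises, it may help to recall the standard dual form of the KL inner problem, a well-known identity from the distributionally robust optimization literature that we restate here in our own notation rather than taking it from the paper:

$$
\inf_{\widetilde{P}:\, \mathrm{KL}(\widetilde{P} \| P) \le \rho} \mathbb{E}_{s' \sim \widetilde{P}}[V(s')] = \sup_{\lambda \ge 0} \Big\{ -\lambda \log \mathbb{E}_{s' \sim P}\big[e^{-V(s')/\lambda}\big] - \lambda \rho \Big\}.
$$

Because the objective degenerates as $\lambda \to 0$ (the log-moment term becomes increasingly sensitive to estimation error in $P$), analyses of KL robust sets typically assume a lower bound $\underline{\lambda} > 0$ on the optimal dual variable; no such issue arises for the TV robust set.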
**References:**
[1] Ma, Xiaoteng, et al. "Distributionally robust offline reinforcement learning with linear function approximation." arXiv preprint arXiv:2209.06620 (2022).
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my questions and I will keep my score. | Summary: This paper proposed a generic framework to study distributionally robust offline reinforcement learning problems, which included a model estimation step and a robust policy optimization step. Previous works in the literature usually assume finite state action spaces; this framework can incorporate function approximations and ultimately paved the way of tackling large state action spaces. Additionally, two specific model estimation approaches are introduced and studied in details; corresponding sample complexity results are provided.
Strengths: This paper explicitly considers the two sources of uncertainty in distributionally robust RL problems. The first is the difference between the estimated training model (from finite samples) and the true training model; the second is the difference between the true training model and the testing model. The current literature usually assumes that the testing model is within a radius of the estimated training model. The double pessimism idea is new. Additionally, only a partial-coverage offline dataset is required, while many previous works require observing all state-action pairs in the dataset.
This paper also provided an interesting sub-optimality theorem that decomposes the sub-optimality gap into a model estimation error part and a data coverage part, which could be used for future developments.
The problem setting, definitions and notations are clear and easy to understand.
Weaknesses: The paper is somewhat hard to follow. Many examples that can be studied under the proposed framework are presented in the main paper. I think it is better to present one example in detail in the main paper, move the others to the appendix, and put the relevant literature review in the main paper. Similarly, in the model estimation part, one example should be enough.
The motivation for considering double pessimism is not clear to me. Two uncertainty sets are introduced: the model uncertainty set, for example, is controlled by $\epsilon$ when using the MLE estimator, and the distribution-shift robust uncertainty set is controlled by $\rho$. It is not clear to me whether the two-phase uncertainty sets are necessary, as in general we don't know $\rho$. In practice, people may use similar datasets that include both training and testing sets to estimate (guess) the uncertainty set radius, which is a single-phase approach that directly captures the difference between the finite training set and the testing set. More rationales or examples should be provided to motivate the two-step approach. For example, 1) the authors could provide applications where a single-step uncertainty set cannot be easily estimated while the two-step approach is more applicable in practice, or 2) the authors could theoretically show that this two-step approach can avoid conservativeness under certain conditions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The quantities $\underline{\lambda}$ and $C_1$ in Corollary 4.1 are not defined in the main paper. They are well-defined in the appendix; I think it is better to at least provide the meaning of $\underline{\lambda}$ here. In addition, is it possible to avoid the assumption of the lower bound on the dual variable $\lambda$? As you consider the two-step uncertainty set, I think you could provide the sub-optimality gap in terms of $\epsilon$ and $\rho$ in Section 4 and discuss the choices of these parameters as well.
2. Some distributionally robust RL papers that adopt KL-divengence as the uncertainty set measure have radius square in the SubOpt term, e.g., [37] [64] in the papers you cited, while your SubOpt does not suffer from the radius square term. Can you comment on this? I think this could be a big benefit when selecting small $\rho$s.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This is a theoretical work at this stage and thus no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review and the meaningful suggestions! In the following, we will try our best to address all your concerns and questions.
**Q1: It is better to present one example in details in the main paper, move the others to the appendix and put relevant literature review in the main paper. Similarly for the model estimation parts.**
**A1:** Thanks for pointing out! We will improve the presentation following your suggestions.
**Q2: The motivation of double pessimism is not clear. It's unknown whether the two-phase uncertainty sets are necessary as in general we don't know $\rho$. ···**
**A2:** Thanks for your questions. In the following, we will explain more about the motivation and necessity of the double pessimism principle.
The theoretical motivation to consider performing pessimism twice is that we want to handle the two sources of distributional shifts in robust offline RL: (i) the mismatch between the behavior policy and the target policies to be learned; (ii) the mismatch between the training environment and the testing environment. Thus our approach is to perform pessimism in the face of *model estimation uncertainty* and *test environment uncertainty* simultaneously. The model estimation uncertainty originates from statistical estimation of the training environment transition kernel $P^{\star}$ under the mismatch between the state-action distributions induced by the behavior policy and the target policies. The test environment uncertainty comes from the mismatch between the environments for training and testing. Since the test environment is in a robust set centered at the training environment $P^{\star}$, these two kinds of uncertainties are coupled with each other and a two-step-style pessimism approach is developed.
Theoretically, our double pessimism approach can reduce the requirement of the training data to the minimum: the robust partial coverage condition (Assumption 3.3), i.e., only covering the distributions induced by the optimal policy and the transition kernels in the robust set of the nominal transition kernel, which is much weaker than full-coverage-style conditions like a uniformly lower bounded data distribution. This is impossible without the double pessimism approach.
Regarding your concern that $\rho$ might be unknown, we admit that in practice this could be the case. In that case, the robust parameter can either serve as a tuning parameter balancing the robustness and the performance of the learned policy, or simply be chosen by experts or priors. Still, in our theoretical work we assume a known $\rho$, which is commonly adopted by the large body of research on robust RL.
Finally, we clarify that our work focuses on the offline setup with no access to the test environment data, which is often the case in practice. E.g., in robotics, people may have no access to the exact place where the robots they trained will be deployed, thus no test data are available. It would be interesting future work to consider a learner provided with extra data related to the test environment.
**Q3: The underscore $\underline{\lambda}$ and $C_1$ in the Corollary 4.1 is not defined in the main paper. I think it is better to provide the meaning of underscore $\underline{λ}$ here. In addition, is it possible to avoid the assumption of the lower bound on the dual variable $\underline{\lambda}$?**
**A3:** We have mentioned in Line 293 that $C_1$ is an absolute constant. Furthermore, we acknowledge the importance of clarifying the meaning of $\underline{\lambda}$ in Corollary 4.1. In the revision, we will provide an explanation that $\underline{\lambda}$ represents the lower bound of the dual variables of some DRO problems.
We want to emphasize that this issue is incurred by the KL robust set. In addition to this setting, we also study robust RL with TV robust set, which is also a standard and important setting. For robust RL with TV robust set, we do NOT need the regular assumption like Assumption E.3, and the final suboptimality gap is *polynomial in all parameters*. It's worth mentioning that the adoption of this assumption in problems with the KL robust set is consistent with previous works (e.g., [45]). From our perspective, removing this assumption in the context of RMDPs with function approximation, without introducing additional assumptions, presents a challenge due to the inherent nature of KL-divergence.
**Q4: As you consider the two-step uncertainty set, I think you could provide the sub-optimality gap in terms of $\epsilon$ and $\rho$ in Section 4 and discuss the choices of them as well.**
**A4:** We note that the conclusions in Section 4 are actually in terms of the $\epsilon$ and $\rho$ you mentioned. The parameter $\xi$ in Corollary 4.1 & 4.2 corresponds to $\epsilon$. Therefore, for KL-divergence robust sets, the suboptimality gap is of order $\mathcal{O}(\xi\cdot\rho^{-1})$, while for the TV-distance robust sets the suboptimality gap is of order $\mathcal{O}(\xi)$, i.e., $\rho$-independent. The choice of $\xi$ is obtained from statistical analysis of the estimation of the nominal transition kernel, while $\rho$ is fixed by the RMDP problem instance we are considering. Thanks for pointing that out and we will add discussions.
**Q5: Some DRRL papers with the KL-divergence have the radius squared in the SubOpt term, e.g., [37] [64] you cited, while your SubOpt does not suffer from the radius-squared term. Can you comment on this?**
**A5:** It seems that both [37] and [64] have suboptimality scaling with $\mathcal{O}(\rho^{-1})$. Their results are presented in the form of sample complexity $N_{\mathrm{KL}} = \mathcal{O}(\epsilon^{-2}\rho^{-2})$, which means that to obtain an $\epsilon$-optimal robust policy, $N_{\mathrm{KL}}$ samples are needed. Converting this to the language of SubOpt gives an $\mathcal{O}(\rho^{-1}\cdot N_{\mathrm{KL}}^{-1/2})$ suboptimality. Thus, in the tabular case, our dependence on $\rho$ coincides with [37] and [64].
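To spell out this conversion (a short derivation using the notation of the cited sample-complexity results, where $\epsilon$ denotes the target suboptimality): solving the sample-complexity bound for $\epsilon$ gives

$$
N_{\mathrm{KL}} = \mathcal{O}\left(\epsilon^{-2}\rho^{-2}\right)
\;\Longrightarrow\;
\epsilon = \mathcal{O}\left(\rho^{-1}\, N_{\mathrm{KL}}^{-1/2}\right),
$$

so for a fixed sample budget $N_{\mathrm{KL}}$, the implied suboptimality of [37] and [64] also scales as $\mathcal{O}(\rho^{-1})$.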
---
Rebuttal Comment 1.1:
Comment: The authors have addressed all my concerns and I have raised my final score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your valuable feedback and for updating your score! We will keep improving our work following your suggestions in the revision. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the offline robust RL problem. A double pessimism approach is proposed and studied.
Strengths: 1. The approach is novel and new compared to previous offline robust RL ones.
2. The approach can be used for large-scale problems.
3. The theoretical analysis is comprehensive.
Weaknesses: 1. The model the authors proposed seems hard to solve. With no robustness considered, the model reduces to the one in [Masatoshi Uehara and Wen Sun, 2022]. The model is already hard to solve in the non-robust case, and becomes even harder together with robustness.
2. Compared to [Masatoshi Uehara and Wen Sun, 2022], the contribution seems a little bit incremental to me. The results and approaches are not surprising, in terms of both approach design and error bound analysis.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Do you have some efficient approach to solve the model you proposed?
2. The analysis seems similar to the ones in [Masatoshi Uehara and Wen Sun, 2022], can you highlight the novelty and contribution?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See parts above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer NCKb**
Thanks for your review and the feedback. We will try our best to address all your concerns and questions in the following.
**Q1: The model the authors proposed, seems hard to solve. Do you have some efficient approach to solve the model you proposed?**
**A1:** We clarify that our work focuses on the statistical side of robust offline RL under general function approximations, which remains an important open problem. Our algorithm is therefore information-theoretic and is indeed intractable with an abstract function approximation class. The objective we designed is to achieve statistical efficiency in the most general setup. Computational efficiency is not our focus.
Meanwhile, if we specify the model space, for example, the tabular case or linear/kernel function classes, we can design approximations of our doubly pessimistic algorithm. For example, in tabular RMDPs, we can replace the iterative infimum in our double pessimism objective with an LCB-style bonus. This has been shown to be efficient both theoretically and experimentally by [1], which demonstrates the efficiency and practicality of our general algorithm design. For the more complicated deep RL setup, a promising approach is to extend the algorithm proposed by [2]. It implements the model-based pessimistic offline RL algorithm for non-robust MDPs, which iterates between an agent update step (corresponding to $\sup_{\pi\in\Pi}$) and an adversarial model update step (corresponding to $\inf_{P\in\widehat{\mathcal{P}}}$). Adapting the adversarial model update step to our doubly pessimistic value estimator can serve as an approximate implementation of the algorithm proposed in our work.
**Q2: Compared to [3], the contribution in terms of approach design and error bound analysis seems a little bit incremental. Can you highlight the novelty and contribution?**
**A2:** We respectfully disagree that our work contributes incrementally compared to [3]. Essentially, our work is on distributionally robust offline reinforcement learning, a different problem setup from [3], with its own distinct challenges in terms of algorithmic design and theoretical analysis. And to the best of our knowledge, our work is the first to propose a provably sample-efficient algorithm for distributionally robust offline RL in the context of general function approximation.
In the following, we compare our work with [3] in more detail.
- **Approach design.** In robust offline RL, there exist two sources of distributional shifts which are coupled with each other: (i) the mismatch between the behavior policy and the target policies to be learned; and (ii) the mismatch between the nominal environment and the perturbed environment. The latter is a unique challenge that is not present in non-robust offline RL [3]. Therefore, it remains unknown how to design sample-efficient algorithms that can provably tackle these two types of shifts under general function approximation. With this in mind, our approach features a novel algorithmic design principle named "double pessimism", which performs pessimistic model selection in the face of both kinds of distributional shifts *simultaneously*. This is essentially different from [3]. Our work is the first to identify such a new algorithmic design principle for robust offline RL with general function approximation.
- **Error bound analysis.** When there are coupled shifts (i) and (ii), the theoretical analysis of [3] would also fail. In fact, our analysis is based on a different framework from [3], in terms of analyzing:
    - *Pessimism in the face of two sources of distributional shifts*: This calls for new analysis techniques for error decomposition and analysis of pessimism (Appendix D). Also, our analysis is based on the notion of the robust partial coverage coefficient (Assumption 3.3), which is customized for robust RL. This is different from using the standard partial coverage coefficient for MDPs [3], requiring new analysis techniques.
    - *Model estimation error analysis coupled with distributional shifts:* Compared with the standard offline RL analysis of transition kernel estimation [3], this requires a delicate application and analysis of the dual representations for distributionally robust objectives (Appendices E and F). Under our unified analysis framework, we customize different model estimation subroutines and their corresponding analyses for different kinds of RMDPs. Also, we highlight that our work studies several new examples of RMDPs, e.g., factored RMDPs, and their model estimation analyses are completely new.
**References:**
[1] Shi, Laixi, and Yuejie Chi. "Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity." arXiv preprint arXiv:2208.05767 (2022).
[2] Rigter, Marc, Bruno Lacerda, and Nick Hawes. "Rambo-rl: Robust adversarial model-based offline reinforcement learning." Advances in neural information processing systems 35 (2022): 16082-16097.
[3] Uehara, Masatoshi, and Wen Sun. "Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage." International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Comment: I am acknowledging I have read your arguments and maintain my rating.
---
Rebuttal 2:
Comment: Dear Reviewer,
Please reply to the authors' rebuttal and ask any clarifying questions if you need.
Thanks,
Your AC | null | null | null | null | null | null |
VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks | Accept (poster) | Summary: The paper presents a new vision-language model where tasks are specified via a language interface instead of being “hard-coded” into the architecture. The model relies on a pretrained decoder model to simply output the answer as text. E.g., for the object detection task the model would output a set of coordinates directly. The model architecture is rather complicated – a backbone encodes the input image into a multi-scale feature map. The language information is encoded by a BERT model and injected into the multi-scale feature map via cross-attention. Then a DETR model produces a set of visual “tokens” from the feature map. These tokens are then fed to the decoder which outputs the answer in natural text. The paper presents strong results on object detection, grounding and captioning.
Strengths: * The paper provides very strong empirical results. Especially the object detection scores (60ap) are close to SOTA despite this being essentially zero-shot.
* The model is relatively easy to train -- it relies on LoRA and pre-trained models. This and the open-sourced code should make the method accessible to other researchers.
* Decoding the output in natural text is a good research direction, it is much more flexible than visual prompt tuning and predefined formats.
Weaknesses:
* The method is rather complicated. There are many components and exactly how they interact is not clear from the paper. As a reader, I would probably not be able to implement this from the paper, and I don’t really know what the crucial components are. I have many questions below which I hope the authors can answer to improve the presentation. E.g. the output-format-as-decoding method is not clearly described.
* There are also VL tasks (e.g. VQA) which I think should be added to the main paper. Currently only detection/segmentation, grounding and captioning are available.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could you provide pseudo-code for your implementations? E.g. you say that “The language features are then injected into each scale of visual features through cross-attention”. Knowing exactly how the cross-attention is implemented would be good.
2. What is output-format-as-decoding? The paper says that you “feed the tokens of structural output format as queries to the decoder”, can you explain in more details what this means?
3. The paper says “except a few LoRA parameters” – can you specify how many?
4. Do you know how much object specific knowledge is available in the pretrained DETR model? Is it possible to do an ablation here? I assume the DETR model has been pretrained on e.g. COCO detection, so it’s maybe not really a zero-shot task for the model?
5. Why is two-stage training needed? Are there ablations showing the effects of this?
6. Are the resnet parameters pretrained? The papers just says “we initialize the model with the pre-trained weights of D-DETR, BERT, and Alpaca-7B”.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: There are many components and exactly how they interact is not clear from the paper.**
**A1:** There are only two main components in the VisionLLM: language-guided image tokenizer and LLM-based decoder, each of which has specific designs for open-ended tasks. We kindly invite the reviewer to see Reviewer wdrB's Q1 for more clarification and detailed interaction among components. We will make it clearer in our revised version.
**Q2: Details of `output-format-as-query` decoding.**
**A2:** Thanks for your good question. Since it was also raised by other reviewers, we provide details about output-format-as-query in terms of data construction, training, and inference process in Common Questions Q1. We will make it clearer in our revised version.
**Q3: There are also VL tasks (e.g. VQA) which I think should be added to the main paper.**
**A3:** For VQA tasks, we qualitatively showcase the performance of VisionLLM on complicated VQA scenarios, as shown in Figure 2(d) and Figure F in the supplementary material. The reason is that conventional VQA metrics are unsuitable for measuring long and detailed answers (see L259-260), and the current GPT-based evaluation is not stable enough; see Reviewer wdrB's Q2 for more details.
**Q4: The language features are then injected into each scale of visual features through cross-attention.**
**A4:** Thanks for your careful review. Here we provide the pseudo-code of the cross-attention layer. In this code, we inject text features into each scale of visual features through cross-attention.
```python
def vision_language_fusion(img_features, text_features, text_masks):
    outs = []
    for img_feature in img_features:
        # flatten spatial dims so each pixel token can attend over the text tokens
        img_feature = rearrange(img_feature, 'b c h w -> b (h w) c')
        img_feature = cross_attention(src=img_feature, ref=text_features, key_padding_mask=text_masks)
        img_feature = rearrange(img_feature, 'b (h w) c -> b c h w')
        outs.append(img_feature)
    return outs
```
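For completeness, here is a self-contained NumPy sketch of the same fusion pattern. This is an illustrative simplification only — it uses a minimal single-head, projection-free attention with a residual connection, whereas the actual model uses learned multi-head cross-attention:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(src, ref, key_padding_mask=None):
    """Single-head attention: `src` tokens attend over `ref` tokens.

    src: (B, N, C) flattened image tokens (queries)
    ref: (B, T, C) text tokens (keys/values)
    key_padding_mask: (B, T) bool, True marks padded text positions
    """
    scores = src @ ref.transpose(0, 2, 1) / np.sqrt(src.shape[-1])  # (B, N, T)
    if key_padding_mask is not None:
        # padded text tokens receive ~zero attention weight
        scores = np.where(key_padding_mask[:, None, :], -1e9, scores)
    attn = softmax(scores, axis=-1)
    return src + attn @ ref  # residual: inject text info into image tokens

def vision_language_fusion(img_features, text_features, text_masks):
    outs = []
    for f in img_features:  # one (B, C, H, W) map per pyramid scale
        b, c, h, w = f.shape
        tokens = f.reshape(b, c, h * w).transpose(0, 2, 1)   # b c h w -> b (h w) c
        tokens = cross_attention(tokens, text_features, text_masks)
        outs.append(tokens.transpose(0, 2, 1).reshape(b, c, h, w))
    return outs
```

The shapes are preserved at every scale, so the fused feature maps can drop back into the downstream detection pipeline unchanged.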
**Q5: the paper says "except a few LoRA parameters" – can you specify how many?**
**A5:** We set the LoRA rank to 64 and use LoRA on the QKVO (Query, Key, Value, and Output) in the attention layers, resulting in approximately 0.9% of trainable parameters. We will clarify this in the revised version.
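As a rough back-of-the-envelope check of that figure (a sketch using assumed LLaMA-7B-like dimensions — hidden size 4096, 32 layers, Q/K/V/O projections — which are our assumptions, not numbers stated in the rebuttal):

```python
# Assumed LLaMA-7B-like dimensions (illustrative only).
hidden = 4096       # each attention projection is (hidden x hidden)
layers = 32         # number of transformer blocks
rank = 64           # LoRA rank, as stated above
projections = 4     # Q, K, V, O

# Each LoRA adapter adds two low-rank matrices:
# A (hidden x rank) and B (rank x hidden).
lora_params = layers * projections * 2 * rank * hidden
total_params = 7e9  # ~7B base parameters

print(lora_params)                 # 67108864
print(lora_params / total_params)  # ~0.0096, i.e. roughly 1% trainable
```

This lands in the same ballpark as the ~0.9% of trainable parameters reported above.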
**Q6: Do you know how much object specific knowledge is available in the pretrained DETR model? Is it possible to do an ablation here? I assume the DETR model has been pretrained on e.g. COCO detection, so it’s maybe not really a zero-shot task for the model?**
**A6:** The determination of whether it is a zero-shot scenario depends on the training and test samples. As the Deformable DETR model trained on the COCO dataset acquires knowledge specific to objects, it cannot truly be considered as achieving zero-shot performance for COCO evaluation. Nonetheless, it is important to note that VisionLLM is specifically designed for open-ended tasks. During testing, _**we employ instruction descriptions with arbitrary object categories, arbitrary task descriptions, and output formats, which are unseen during training**_. By leveraging the vast world knowledge embedded in a pre-trained LLM, we have observed exceptional performance of our model in comprehending and effectively handling previously unseen categories, such as "red gamepad" (see Figure 1(a)), which are not included in the COCO categories.
**Q7: Why is two-stage training needed? Are there ablations showing the effects of this?**
**A7:** Thanks for your careful review. As depicted in Figure A in the supplementary material, we adopted a two-stage training approach to expedite the convergence of VisionLLM. Our experiments showed that the two-stage training, starting from easy to hard tasks, resulted in faster convergence than a single-stage training approach.
**Q8: Are the ResNet parameters pretrained? The paper just says “we initialize the model with the pre-trained weights of D-DETR, BERT, and Alpaca-7B”.**
**A8:** Yes, the ResNet parameters are loaded from the pre-trained weights of Deformable DETR.
---
Rebuttal Comment 1.1:
Title: reply
Comment: Thanks for your detailed reply and clarifications. I will retain my score and increase my confidence.
---
Reply to Comment 1.1.1:
Comment: Thank you for your recognition. Your feedback is highly valuable to us. We will carefully consider your suggestions and continuously improve our work. | Summary: This work presents an LLM-based framework, VisionLLM, for vision-centric tasks. VisionLLM treats images as a foreign language and aligns vision-centric tasks with language tasks using language instructions. Extensive experiments show that VisionLLM delivers comparable performance with task-specific models over different vision-centric tasks.
Strengths: The authors demonstrate the feasibility of using large language models as visual decoders in which vision information is treated as a foreign language. With this idea in mind, the authors introduce a series of methods to align vision tasks with LLMs in a matched format, including designing language instructions, designing the decoding process, and adding additional vocabulary tokens. The conducted experiments demonstrate that such pix2seq modelling can be scaled up into a generalist model. Glad to see that some possible limitations of this technical route are discussed to analyze the gaps in the experimental results.
Weaknesses: This work follows the idea of pix2seq to build a generalist model, and the authors believe that it is natural to do so, without providing a sufficient explanation of the motivation behind it. Despite the recent popularity of LLMs, we should not blindly apply them to other modalities without in-depth consideration unless we can clarify the benefits and demonstrate them accordingly. The authors claim that the reasoning abilities and world knowledge of LLMs help vision tasks, but only numerical results can be viewed as a kind of evidence for this claim. It is not clear and convincing to me in two aspects:
- Whether the reasoning abilities and world knowledge of LLMs truly help? and how?
- How about scaling up task-specific vision models when compared to VisionLLM? In other words, how can you demonstrate the performance superiority comes from using LLMs as vision decoders, instead of model scaling up?
There are a lot of technical designs introduced to model vision-centric tasks in a language-matched format. The reasonableness of some of these is hard to guarantee, which is subject to further discussion. One of the strangest designs in VisionLLM is the addition of extra tokens to the vocabulary to represent position values and categories. This approach appears to go against the initial purpose of leveraging LLMs' world knowledge to gain an advantage in visual tasks. Besides, these additionally added tokens may cause ambiguities with the original ones in the vocabulary. I am looking forward to the authors' understanding and/or some necessary evidence on these points.
The effectiveness of VisionLLM is solely evaluated on ResNet-50 and InternImage-H backbones. The general applicability is subject to further evaluation.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Some questions have been listed in the weakness part. Other than that, more questions about the details are as follows:
1. How do you handle the <image> placeholders in the actual input of VisionLLM? Do you replace them with actual image tokens in the latent space?
2. For instance segmentation, the value of N varies for different objects. How do you determine this value?
3. How are "tasks defined by instructions" parsed into formatted queries in "output-format-as-query" decoding? By pre-defined templates for different task categories?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper tries to catch the takeoff of LLMs by building an LLM-based generalist model for vision tasks. But it cannot provide a clear and convincing motivation statement, and cannot dive into the rationale of the proposed technical designs. There are also some unclear method statements, as listed in my questions.
Overall, the idea behind it is straightforward. But I think the entire community should be even more careful with these straightforward ideas, and think more deeply about whether they are truly reasonable and whether they can deliver real contributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: This work follows the idea of pix2seq to build a generalist model and the authors believe that it is natural to do so, without providing a sufficient explanation of the motivation behind it.**
**A1:** We argue that _**VisionLLM is not the scaling up of Pix2Seq**_. Although both models incorporate coordinate discretization for object detection, they differ significantly in task generality, model design, and decoding process, as explained in Common Questions Q2. Moreover, directly scaling up Pix2Seq cannot achieve an open-ended task model. Such a model would not converge, as we discussed in L130-135 of the supplementary material and Table A(d).
For the motivation behind this work, please note that this work focuses on open-ended vision tasks that can be customized by users according to their needs. We have summarized the limitations of the existing paradigms for open-ended tasks in Figure 1 and the first two paragraphs of the Introduction, which inspire us and shape the main objective of our work. To achieve open-ended tasks, we tackle the challenge of aligning vision tasks and LLMs from various aspects including: instruction design, model design, and training. We have explained the motivation for each design choice in the first paragraph of each subsection of the method section.
**Q2: Whether the reasoning abilities and world knowledge of LLMs truly help? And how?**
**A2:** LLM plays a crucial role in our open-ended task framework, for the following reasons:
(1) LLM can parse instructions, which is a key feature of our system. As we explained in Reviewer TifL's Q1, our system cannot converge without instruct-tuned LLM. Furthermore, instruction parsing enables models to comprehend the target object mapping and the output format of the perception tasks. LLM is the only model that possesses this feature, as it is pre-trained on a large corpus of user instruction and code data.
(2) LLM also facilitates image description with controllable text length and visual question answering with complex reasoning, as illustrated in Figure 2(c)(d), and Figure F in supplementary material. These tasks require the capabilities of instruct-following and relation reasoning among objects. These capabilities are not learned from visual data and models, but from LLMs pre-trained on web-scale NLP data, as demonstrated by [1].
**Q3: How about scaling up task-specific vision models when compared to VisionLLM? In other words, how can you demonstrate that the performance superiority comes from using LLMs as vision decoders, instead of model scaling up?**
**A3:** This is a common misconception. Model scaling alone cannot lead to open-ended task capability. Large-scale vision models like ViT-22B, Swin-G, and InternImage-G are still limited to tasks in specific formats. They cannot handle vision tasks with different or unknown formats. _**So even if we scale up task-specific vision models, they are still not comparable to our model in terms of open-ended tasks.**_ On the contrary, an LLM pre-trained on a web-scale corpus is proven to effectively understand user instructions and provide reasonable answers, which is important for VisionLLM. For the benefits of the LLM for our model, please refer to Reviewer 87vj's Q2.
**Q4: One of the strangest designs in VisionLLM is the addition of extra tokens in the vocabulary to represent the position values and categories. Besides, these additionally added tokens may cause ambiguities with those original ones in the vocabulary.**
**A4:** _**Extra tokens are a common way to extend the capabilities of an LLM**_, especially when supporting a new language or task [2]. To support vision tasks, the original vocabulary of the LLM is not enough (see L210-213); it is necessary to increase the token size and ensure alignment during training. These tokens will be aligned with the LLM by instruction tuning. _**They will not conflict with the original tokens**_, because in our constructed training data, the newly added tokens are only used for vision tasks without any overlapping semantics with the original tokens.
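To make this concrete, here is a minimal sketch of how continuous box coordinates can be quantized into such extra location tokens. The token names `<p0>`…`<p499>` and the bin count of 500 are illustrative assumptions, not VisionLLM's actual vocabulary:

```python
def make_location_vocab(num_bins=500):
    """Extra location tokens appended after the original vocabulary."""
    return [f"<p{i}>" for i in range(num_bins)]

def coord_to_token(value, num_bins=500):
    """Quantize a coordinate in [0, 1] to a discrete location token."""
    idx = min(int(value * num_bins), num_bins - 1)
    return f"<p{idx}>"

def box_to_tokens(box, num_bins=500):
    """Encode an (x1, y1, x2, y2) box, each coordinate in [0, 1], as four tokens."""
    return [coord_to_token(v, num_bins) for v in box]

print(box_to_tokens((0.1, 0.2, 0.5, 0.9)))
# -> ['<p50>', '<p100>', '<p250>', '<p450>']
```

Because these tokens are string identifiers that never appear in natural text, they map to their own embedding rows and do not collide with the LLM's existing number tokens.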
**Q5: The effectiveness of VisionLLM is sorely evaluated on ResNet-50 and Intern-H backbones. The general applicability is subject to further evaluation.**
**A5:** We evaluate VisionLLM on two representative backbones of different scales. ResNet-50 is the most representative common-scale backbone, while InternImage-H is a large-scale backbone with state-of-the-art performance. We provide additional experiments using ViT-B as a vision encoder, through which we reach similar conclusions as ResNet-50 and InternImage. See Common Questions Q3 for more details.
**Q6: How do you handle the <image> placeholders in the actual input of VisionLLM? Do you replace them with actual image tokens in the latent space?**
**A6:** Yes, for `<image>`, we replace it with image tokens in the latent space.
**Q7: For instance segmentation, the value of N varies for different objects. How do you determine this value?**
**A7:** During training, the number of points varies randomly with the language instruction. During inference, the number of points N is specified by the user in his/her instructions.
**Q8: How are "tasks defined by instructions" parsed into formatted queries in "output-format-as-query" decoding? By pre-defined templates for different task categories?**
**A8:** See details of the output-format-as-query decoding in Common Questions Q1. We introduce the data construction, training, and inference details of the decoding process.
[1] Ouyang, Long, et al. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems 35 (2022): 27730-27744.
[2] Schick, Timo, et al. "Toolformer: Language models can teach themselves to use tools." arXiv preprint arXiv:2302.04761 (2023).
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: Thank you for your efforts in providing the rebuttal. However, some of my questions have not been fully understood, and the current rebuttal has neither convinced me nor addressed my concerns. I look forward to a more in-depth discussion.
In my review comments, I did **NOT** suggest that "***VisionLLM is not the scaling up of Pix2Seq.***" What I would like to discuss is whether the modeling of VisionLLM is reasonable enough. In VisionLLM, visual information (e.g., coordinates) is decoded by an LLM using additionally added tokens. Such modelling raises a series of questions that we really need to take seriously:
**How does the use of the LLM impact the vision tasks themselves?** It is common sense that an LLM can understand instructions and enable versatility across different tasks. What I am concerned about is the impact of the LLM when applied to visual tasks. Is its impact positive or negative for vision tasks? And WHY?
As shown in the experimental results provided in this rebuttal, the proposed VisionLLM-ViT-B has lower AP on Instance Seg. and lower BLEU-4 on Captioning compared to Pix2Seq v2 (ViT-B), even though they use the same vision encoder. Are the performance drops of VisionLLM caused by the different decoder it uses? How about using a vision decoder (or a vision decoding head) instead of a language decoder?
**How does the use of additionally added tokens impact the effectiveness?** "Extra tokens are a common way" has never been a responsible answer to "Is this modeling correct (although it is often used)?". After reading the authors' responses, I am still confused about:
- Does adopting additionally added tokens run counter to the open-ended purpose? For instance, the added classification tokens correspond to classification semantics in a deterministic way as stated in Line 224, and their number is limited. What if the target category falls outside these added classification tokens? Is this a closed-set limitation that runs against the open-ended goal?
- Why don't these additionally added tokens cause ambiguity with the original tokens in the vocabulary? For example, tokens for numbers (0~9) exist simultaneously among both the original tokens and the additionally added tokens. How does the model handle them?
- Why use auto-regressive decoding for the additionally added tokens? When we expand the vocabulary of the LLM with additional tokens, we need to decode these tokens auto-regressively as for other tokens in the vocabulary. Compared to directly regressing coordinates or performing classification over these tokens, is this auto-regressive decoding the correct manner?
I hope the authors can deeply consider these important questions and provide responsible answers. Simply following commonly used methods can sometimes mislead the research community.
---
Reply to Comment 1.1.1:
Comment: We appreciate your time and effort in reviewing. We are open to engaging in a more in-depth discussion regarding this work.
**Q9: The statement of "VisionLLM is not the scaling up of Pix2Seq".**
**A9:** In your comments, it was mentioned that "Conducted experiments demonstrate the feasibility of _**scaling up**_ such pix2seq modeling as a generalist model", and also raised the question "How about _**scaling up**_ task-specific vision models when compared to VisionLLM?". Therefore, the statement _**"VisionLLM is not the scaling up of Pix2Seq"**_ has been emphasized to clarify the contribution of this work and avoid potential misunderstanding.
**Q10: How does the use of the LLM impact the vision tasks themselves? It is common sense that the LLM can understand instructions and enable versatility across different tasks. What I am concerned about is the impact of the LLM when applied to visual tasks. Is its impact positive or negative for vision tasks? And WHY?**
**A10:** _**LLM plays a crucial role in this work due to its parsing and instruction-following capabilities. These capabilities serve as the foundation for defining and understanding open-ended descriptions of vision tasks.**_ We have explained this _**in Q2**_, and provided additional evidence _**in the follow-up question 1 of Reviewer TifL**_. Furthermore, we also explained _**in Q3**_ that simply scaling up the decoding head without web-scale corpus pre-training cannot achieve the same capability as LLM. Therefore, LLM is a positive and crucial component for open-ended vision tasks from this perspective.
Based on this point, we introduce VisionLLM, a viable framework with a series of tailored designs that align vision tasks with LLMs. As we mentioned _**in follow-up question 2 of Reviewer TifL**_, due to the need to unify various vision tasks, the framework makes compromises that may affect its performance, especially in segmentation tasks. Additionally, Pix2Seq v2 has 128 polygon points, which is 4 times more than our model. It also uses ensemble and crop-then-segment techniques to enhance the segmentation results, but these techniques are independent of the open-ended task and are not considered in this work.
Regarding image captioning, as we explained in _**Q2 of Reviewer kyDb**_, the linguistic capabilities of our model are aligned with LLM, resulting in longer and more detailed responses. If we discard the LLaVA-Instruct-150K dataset and train on COCO Captions, our model could achieve better performance, as demonstrated in the table below:
| Model | BLEU-4 |
| ---------------- | ------ |
| Pix2Seqv2-ViT-B | 34.9 |
| VisionLLM-R50 | 31.0 |
| VisionLLM-R50* | 33.0 |
| VisionLLM-ViT-B | 31.5 |
| VisionLLM-ViT-B* | 35.6 |
\* indicates discarding the LLaVA-Instruct-150K dataset and training on COCO Captions
However, there is an inconsistency between the standard metric (favoring shorter text) and the user experience. If we prioritize alignment with the standard captioning benchmark, it may sacrifice user-friendliness and the overall versatility of the model. In contrast, this work aims to tackle user-defined visual tasks in a flexible manner and provide a practical framework that addresses open-ended tasks effectively. So unlike previous models (e.g., Pix2Seq v2) that pursue superior performance on _**pre-defined vision tasks**_, our approach prioritizes _**open-ended tasks**_ to meet the diverse needs of users. | Summary: This paper introduces VisionLLM, an instruction-following agent that can perform various vision-only (classification/detection/segmentation) and vision-language (captioning/VQA) tasks. The proposed model connects a pre-trained visual backbone with a language decoder Alpaca with a language-aware image-tokenizer. To unify vision-only and vision-language tasks, VisionLLM adopts different language instruction formats. Furthermore, it proposes an “output-format-as-query” framework for efficient parallel decoding for vision-centric tasks.
Strengths: The paper presents an impressive effort in developing a large decoder for vision-only and vision-language tasks, using state-of-the-art multimodal foundational models while treating images as a foreign language. The technical details are comprehensive and sound. Extensive experiments demonstrate the effectiveness of the proposed system compared to state-of-the-art Pix2Seq approaches. Ablation studies show insights into how the system performs in single-/multi-task scenarios and with different image tokenizing schemes.
Weaknesses: Even though the proposed system is technically sound, it is still quite complicated. It is unclear what its major advantages are compared to previous Pix2Seq methods and task-specific models.
I have some doubts about the system design:
- Is the pre-trained instruction-following LLM (Alpaca) crucial in your system design? Can the system perform as well with a (non-instruction-following) LLaMA? Do more advanced instruction-following agents bring advantages over naive LLMs such as T5?
- One of the major disadvantages of using a large-scale pre-trained LLM (Alpaca) is the increased training and inference costs. Could you compare the efficiency of the proposed system to prior task-specific models/generalist models that do not use such large-scale pre-trained decoder?
- Why not adopt a ViT-B visual encoder to facilitate comparison to other generalist models such as Uni-Perceiver and Pix2Seq (as in Table 1)?
The paper is overall well-written. I found one typo:
L166: placeholdersok?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have some questions about technical details:
- For detection/segmentation tasks, I am confused how the proposed “output-format-as-query” approach (L227-239) works with variable numbers of objects per image. How many “<cls> <x1> <y1> <x2> <y2>” did you send to decoder during training and inference?
- How does the “output-format-as-query” approach avoid parallel decoding for image captioning (L232)? As far as I understand, the decoder still adopts causal attention masking, and therefore it seems token-by-token generation is necessary.
- The proposed system supports customization of number of points for segmentation tasks — does it require specific training paradigms, i.e., balancing the number of masks with different number of points?
- Does your system generalize to unseen instructions during test time?
- What is the sampling procedure for open-ended tasks? For example, do you use top-k/nucleus sampling?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No. Please include discussions on the use of foundation models and the potential biases your system might inherit from these pre-trained models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Is pre-trained instruction-following LLM (Alpaca) crucial in your system design?**
**A1:** The instruction-following LLM is important for the convergence of VisionLLM. We have observed that Alpaca converges more easily than LLaMA. However, the LLM is not limited to Alpaca; Flan-T5 (an instruction-following version of T5) is also a good alternative.
**Q2: Time cost analysis.**
**A2:** Thanks for your suggestion. As shown in the following table, we compare the inference speed of Pix2Seq and VisionLLM. Specifically, we executed tests on a single A100 GPU, utilizing code and model weights from Pix2Seq's official repository. For both methods, we set the batch size as 1 and the image size as 1024x1024.
As can be seen from the table, although VisionLLM is equipped with a large LLM-based decoder, its inference speed is faster than Pix2Seq's. This shows that VisionLLM has an acceptable inference speed thanks to the proposed output-format-as-query decoding. We will add the time cost analysis in our revised version.
| Method | FPS | Times per Image |
| --------------- | --------- | --------------- |
| VisionLLM-R50 | 5.1 img/s | 197.4 ms |
| Pix2Seq-R50 | 4.4 img/s | 227.3 ms |
| VisionLLM-ViT-B | 4.0 img/s | 251.7 ms |
| Pix2SeqV2-ViT-B | 3.4 img/s | 294.1 ms |
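The two columns of the table can be cross-checked against each other, since per-image latency is roughly the reciprocal of FPS. A minimal sanity check (the FPS/latency pairs are copied from the table above; the small mismatches, e.g. 5.1 img/s vs. 197.4 ms, are presumably measurement noise):

```python
# Each entry: model name -> (reported FPS, reported per-image latency in ms).
measurements = {
    "VisionLLM-R50":   (5.1, 197.4),
    "Pix2Seq-R50":     (4.4, 227.3),
    "VisionLLM-ViT-B": (4.0, 251.7),
    "Pix2SeqV2-ViT-B": (3.4, 294.1),
}

for name, (fps, reported_ms) in measurements.items():
    implied_ms = 1000.0 / fps  # latency implied by throughput
    print(f"{name}: implied {implied_ms:.1f} ms vs reported {reported_ms} ms")
```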
**Q3: Why not adopt a ViT-B visual encoder to facilitate comparison to other generalist models such as Uni-Perceiver and Pix2Seq (as in Table 1)?**
**A3:** We evaluate VisionLLM on two representative backbones of different scales. ResNet-50 is the most representative common-scale backbone, while InternImage-H is a large-scale backbone with state-of-the-art performance. We provide additional experiments using ViT-B as the vision encoder, through which we reach similar conclusions as with ResNet-50 and InternImage. See Common Questions Q3 for more details.
**Q4: How many “<cls> <x1> <y1> <x2> <y2>” did you send to the decoder during training and inference?**
**A4:** During both the training and inference phases, we input 100 sets of "<cls><x1><y1><x2><y2>" to the decoder, generating 100 object predictions. Those predictions with higher confidence scores will be retained, adhering to a common practice of the object detection task. We will make it clearer in our revised version.
**Q5: How does the “output-format-as-query” approach avoid parallel decoding for image captioning (L232)?**
**A5:** Sorry for the misunderstanding. We unify the output format for various types of tasks in the form of natural language tokens. However, different types of tasks use different output formats, as illustrated in Figure 4 of the main paper.
For perception tasks, we use "<cls><x1><y1> ..." as the output format, employing the “output-format-as-query” approach for parallel decoding.
For understanding tasks, such as image captioning and VQA, we use "<bos>" as the output format, following a token-by-token generation process. We will provide a more explicit explanation of this aspect in the revised version.
**Q6: Does it require specific training paradigms?**
**A6:** No, VisionLLM does not require specific training paradigms. Like those of LLMs, the task instructions are randomly changed in terms of task type, task target, and output format (including the number of points).
**Q7: Does your system generalize to unseen instructions during test time?**
**A7:** Yes, our system can generalize to unseen instructions at test time. We incorporate randomized task descriptions, diverse task output formats, and randomized object categories during training. As a result, the system exhibits robustness to changes in instructions.
**Q8: What is the sampling procedure for open-ended tasks?**
**A8:** We employed top-1 (greedy) sampling in our approach. Utilizing more intricate sampling techniques, such as top-k sampling, could potentially lead to improved performance.
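Top-1 sampling is simply greedy decoding: always take the highest-probability token. A minimal, pure-Python sketch contrasting top-1 with top-k sampling over a logit vector (illustrative only, not the paper's implementation):

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_token(logits, k=1, rng=random):
    """Top-1 (greedy) when k == 1; otherwise sample among the k largest logits."""
    if k == 1:
        return max(range(len(logits)), key=lambda i: logits[i])
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in top])
    return rng.choices(top, weights=probs, k=1)[0]

logits = [0.1, 2.5, 0.3, 1.7]
print(sample_token(logits))       # greedy pick: 1 (the argmax)
print(sample_token(logits, k=2))  # randomly index 1 or 3, weighted by probability
```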
---
Rebuttal Comment 1.1:
Title: Follow-up questions
Comment: Thanks the authors for the comprehensive rebuttal. I have a few follow-up questions:
**1. *"Alpaca converges easily"* does not really convince me of its importance in the system design. Are there any references showing that *instruction-following* LLMs are easier to optimize and converge?**
**2. Uni-Perceiver-V2 / Pix2Seq v2 are much stronger at segmentation tasks (when using the same ViT-B backbone). Why?**
**3. Do you have any evidence on *generalizing to unseen test instruction*? To what extent does it work and when does it fail?**
---
Reply to Comment 1.1.1:
Comment: **Q1. "Alpaca converges easily" does not really convince me of its importance in the system design. Are there any references showing that instruction-following LLMs are easier to optimize and converge?**
**A1:** We would like to discuss this issue from two perspectives as follows:
(1) _**The LLM with instruction-following and parsing capabilities is able to effectively interpret the vision task instructions, which helps to reduce the loss during the early stage of model training.**_ To preserve the language ability of the LLM itself, we freeze the weights of the LLM during training. If we use an LLM without instruction-following ability, the frozen LLM struggles to acquire it. Therefore, we directly utilize instruction-following LLMs. The example below demonstrates that Alpaca (instruction-following LLaMA) can parse category mapping relationships and the required output format for the task, while LLaMA cannot.
```
System message:
"Below is an instruction that describes a task. Write a response that appropriately completes the request.
Input: {input}
Output:"
Input:
"class set: {'person': <c0>, 'car': <c1>, 'table': <c2>, 'cat': <c3>, 'television': <c4>, 'a man in a black hat': <c5>}. What is the class index associated with the class 'a man in a black hat' in the given class set?"
# Alpaca output
Output:
"The class index associated with the class 'a man in a black hat' in the given class set is 5."
# LLaMA output
"Answer: 0 Input: Given an array of strings, remove the last element of the array. The returned array should be a new array with the last element removed."
```
(2) For _**the detailed loss curves**_ (since we cannot include figures at this time, we provide the loss values every 2000 iterations), the model trained with Alpaca demonstrates faster loss reduction, particularly in the early stages. We do not rule out the possibility that LLaMA can converge if it is unfrozen and allowed to train for more epochs, but at least under our current experimental setting (frozen LLMs, 50 epochs), Alpaca is better at following instructions than LLaMA.
```
# LLaMA loss
6.08, 4.61, 4.46, 4.41, 4.33, 4.22, 4.15, 4.10, 4.04, 4.01, 3.95, 3.87, 3.85, 3.98, 4.36, 4.63
# Alpaca loss
5.26, 4.56, 3.97, 2.85, 2.77, 2.69, 2.60, 2.62, 2.54, 2.56, 2.51, 2.50, 2.50, 2.49, 2.47, 2.45
```
We will make it clearer in our revised version.
**Q2: Uni-Perceiver-V2 / Pix2Seq v2 are much stronger at segmentation tasks (when using the same ViT-B backbone). Why?**
**A2:** Firstly, _**VisionLLM is distinct from models for pre-defined tasks (e.g., Uni-Perceiver-V2, Pix2Seq v2), as it has the capability to handle open-ended tasks customized by users, providing greater flexibility and versatility.**_ To be able to unify and flexibly customize tasks, we have made some design and trade-offs in terms of task formulation, model selection, and training methods, which may result in some performance losses, particularly on segmentation tasks.
Compared to Uni-Perceiver-V2, our model utilizes polygons with discrete coordinates to represent instance masks, ensuring a uniform task output. _**This results in two levels of performance loss**_: (1) the conversion from mask to polygon representation results in performance degradation, and (2) the conversion of polygon coordinates to integers also incurs performance loss (see L347-L353).
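The second source of loss, coordinate discretization, can be illustrated with a minimal quantization round trip; the bin count and image size below are illustrative, not the values used in the paper:

```python
def quantize(coord, img_size, num_bins):
    """Map a continuous coordinate in [0, img_size) to a discrete bin index."""
    return min(num_bins - 1, int(coord / img_size * num_bins))

def dequantize(bin_idx, img_size, num_bins):
    """Recover an approximate coordinate from a bin index (bin center)."""
    return (bin_idx + 0.5) / num_bins * img_size

# A polygon vertex can only be recovered up to half a bin width,
# which is the irreducible error of integer coordinate tokens.
x = 123.7
b = quantize(x, img_size=1024, num_bins=1000)
x_rec = dequantize(b, 1024, 1000)
print(b, round(x_rec, 2), round(abs(x - x_rec), 2))  # 120 123.39 0.31
```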
While Pix2Seq v2 also employs polygons to represent masks, they use _**128 polygon points**_ (at least 4 times more than our model) to enhance segmentation performance. Additionally, they utilize ensemble methods to _**merge the results of 8 inference runs**_ and _**adopt a crop-then-segment approach**_ to further improve segmentation performance. In contrast, VisionLLM requires an LLM-based decoder to support general user instruction parsing. This restricts us from using more than 32 polygon points on existing hardware, and we have not implemented generalist-model-agnostic techniques (such as ensemble and crop-then-segment) to enhance our segmentation results.
We will clarify this in our revised version.
---
Reply to Comment 1.1.2:
Comment: **Q3. Do you have any evidence on generalizing to unseen test instruction? To what extent does it work and when does it fail?**
**A3:** There are four scenarios:
(1) **Customized Detection Target:** Our model is trained on the COCO dataset, but it is not limited to detecting only the categories present in the dataset. The categories our model can detect can be question sentences or descriptions in natural language. This flexibility is demonstrated in Figure 2(a) and Figure I in the attached PDF file for rebuttal.
(2) **Task Description Flexibility:** Our model supports user input in natural language at the task description level. It's important to note that even for the same task, the descriptions can vary. For example, the grounding task can be described in multiple ways, such as:
```
# Long instruction
Please identify all objects belonging to the category set {<expression>: <cls0>}. For each detected object, specify its location within the range <range> by determining the offsets of top-left and bottom-right corners relative to the center point. To indicate the object's class and location, provide the output in the format (c, x1, y1, x2, y2), where 'c' represents the class index starting from 0, and (x1, y1, x2, y2) correspond to the offsets of the bounding box corners. The image is: <image>
# Short instruction
Please locate the object mentioned in the category set {<expression>: <cls0>}. The image is: <image>
```
These descriptions can have different lengths and sentence structures. We have validated the stability of our model with different prompts in Figure B in the supplementary material, showcasing its ability to generalize to random and unseen descriptions.
(3) **Customized Output Formats:** Our method allows for customized output formats, even in object detection tasks. For instance, we can have the format as (c, x1, y1, x2, y2) or (x1, y1, x2, y2, c). We can also control the number and meaning of each point. For example, in Figure 1(a), if we modify the prompt to (x1, y2, x2, y1, c) (outputting the bottom left and top right points), the output can be as follows:
```
"The bounding boxes are [(226.4, 347.4, 363.1, 229.8, <c0>), (441.1, 269.9, 538.6, 183.5, <c1>)]."
```
**(4) Flexible Task Combination:** VisionLLM can also combine different tasks through instructions. For example, with the following instruction, we can combine localization and question-answering tasks to count the number of white cats.
```
"Locate all the objects in the image that are part of the category set {'white cat': <c0>} and output their index of class label starting from 0 and offsets of bounding box coordinates. The bounding box should be a rectangle that covers the entire object. The offsets should be given as top-left and bottom-right corners of the rectangle relative to the center point and should be within <range>. The output format should be (c, x1, y1, x2, y2). The image is: <image>. <cls><x1><y1><x2><y2>...<cls><x1><y1><x2><y2>. How many white cats are in this image?"
```
Despite demonstrating zero-shot capabilities in some unseen scenarios, VisionLLM has some limitations. Due to the limited scale of training data in this version, the connection between vision and language concepts still needs improvement, leading to hallucination issues in VQA and captioning tasks. Additionally, it also struggles with handling specialized terms in niche domains, such as "Magnetic Accelerator".
Strengths: - This is the first attempt to use natural language prompts for vision-centric tasks such as object detection and instance segmentation. I think this work is tackling an important problem.
- The language-guided image tokenizer is a novel component to convert image into tokens guided by text.
- VisionLLM treats image tokens the same way as text tokens, such that the entire task (including image and class list) can be encoded as one piece of text. This converts vision tasks into sequence generation problem handled by a LLM. (I'm actually not sure if this understanding is correct, so I ask this in the Questions section as well.)
Weaknesses: The framework is not a simplistic one, consisting of various components and types of losses, e.g., using bipartite matching for one type of outputs. Representing an image as a set of $\{e_i, l_i\}$ is also kind of specific.
The model can do VQA but I didn't see any VQA results in the paper.
Overall my main complaint is that the framework and pipeline seem a bit complicated, but it is designed to incorporate a wide variety of tasks and seems effective at it. It is a good first attempt at using natural language and LLM to tackle vision tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How are image tokens $T$ and `<text>` passed into the LLM? On line 166 it says `<image>` and `<question>` are placeholder tokens for image tokens and question tokens. I imagine `<question>` will be replaced by the actual question inside the language instruction text (`<text>`), but will the `<image>` token be replaced by the image tokens $T$? Figure 3 seems to imply that $T$ is passed into the LLM separately from `<text>`. I think it's more flexible if all inputs are formulated into one piece of text to pass into the LLM, as one can directly extend it to use more than one images, etc.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Pipeline seems a bit complicated.**
**A1:** We would like to recap the components of VisionLLM. It has two key components: the language-guided image tokenizer and the LLM-based decoder. They work together in the following way:
(1) Firstly, the visual encoder extracts features from the image at different scales, and the text encoder obtains features from the language input.
(2) Then, in the language-guided image tokenizer, the multi-scale image features interact with the language features to generate language-guided image tokens. Each token is represented by embedding and location information.
(3) Finally, these image tokens replace the placeholder `<image>` in the language prompts. The resulting language prompts are fed into the LLM-based decoder for open-ended tasks.
In addition, some specific designs, e.g., bipartite matching and representing images as a set of $(e_i, l_i)$, are useful for accelerating the convergence of VisionLLM. Generally speaking, all the components are closely interconnected and indispensable for VisionLLM. Building a generalist model for open-ended tasks is a complicated systems-engineering effort, and we will explore more simplified implementations in the future.
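The bipartite-matching step mentioned above can be sketched with SciPy's Hungarian solver. The L1 cost below is illustrative only; the actual matching cost in such detectors typically also includes classification and IoU terms:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy example: 3 predicted boxes vs. 2 ground-truth boxes, each (x1, y1, x2, y2).
preds = np.array([[0, 0, 10, 10], [50, 50, 60, 60], [100, 0, 110, 10]], dtype=float)
gts   = np.array([[49, 49, 61, 61], [1, 1, 9, 9]], dtype=float)

# Cost matrix: mean absolute coordinate difference between each pred/gt pair.
cost = np.abs(preds[:, None, :] - gts[None, :, :]).mean(-1)

# Optimal one-to-one assignment; each matched prediction gets a supervision target.
row, col = linear_sum_assignment(cost)
print([(int(r), int(c)) for r, c in zip(row, col)])  # [(0, 1), (1, 0)]
```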
**Q2: The model can do VQA but I didn't see any VQA results in the paper.**
**A2:** For VQA tasks, we qualitatively showcase the performance of VisionLLM on complicated VQA scenarios, as shown in Figure 2(d) and Figure F in the supplementary material. There are two-fold reasons:
(1) Our model was trained on the LLaVA-Instruct-150K dataset, which encourages the model to generate long and detailed answers for visual questions, as explained in L256-257. However, most existing VQA benchmarks expect short answers, which makes our model score low (VisionLLM-R50: 33.86 vqa-score on VQAv2 test-dev). It is unfair to compare our model with other models on these benchmarks.
(2) As we discussed in Q4 of Reviewer kyDb, the existing GPT-based evaluation methods (e.g., the metrics in LLaVA) are unstable and are affected by the online closed-source model released by OpenAI. So there is currently no widely recognized standard for such VQA evaluation.
**Q3: How are image tokens `<image>` and `<text>` passed into the LLM?**
**A3:** The image tokens are directly placed at the placeholder `<image>` in the language prompt. We will make it clearer in our revised version.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks to the authors for the rebuttal. I maintain my opinion that the method is not simple enough, but I also maintain my positive rating of a weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your efforts in the review. | Rebuttal 1:
Rebuttal: Dear all reviewers:
We sincerely appreciate the reviewers for their time and effort in the review. This submission received 5 review comments, and 4 reviewers gave positive scores. We first address some common questions, followed by detailed responses to each reviewer separately. We hope our responses could clarify existing doubts.
 
### Common Questions
**Q1: Details of the `output-format-as-query` decoding.**
**A1:** The `output-format-as-query` decoding technique is designed to parse the standard output format, which is compatible with the LLM-based decoder, from user instructions. The details are as follows:
**Data Construction:** Following self-instruct [1], we create various user instructions for each task to simulate human interaction. Here are some examples:
```
System message:
"You are an AI assistant for translating the user instructions to the standard prompt. Please help me parse the following input.
Input: {input}
Output:"
# Object detection
Input:
"The image is: <image>. Please thoroughly examine the image and detect all objects belonging to the category set {'person': <c0>, 'bicycle': <c1>, 'car': <c2>, 'motorcycle': <c3>}."
Output:
"The bounding boxes are <cls><x1><y1><x2><y2><cls><x1><y1><x2><y2>...<cls><x1><y1><x2><y2>."
# Image caption
Input:
"The image is: <image>. Please write a short caption for this image."
Output:
"The image shows that <bos>"
```
**Training:** After obtaining the user instruction data, we fine-tune Alpaca with the next-token prediction task for supervision, enabling it to accomplish the output-format parsing process.
**Inference:** As described in L229-236 and Figure 4, the inference process involves the following steps:
(1) We first use the fine-tuned Alpaca to parse the user instructions into standard output formats for different tasks. For instance, in the case of object detection, the output format may be "The bounding boxes are <cls><x1><y1><x2><y2><cls><x1><y1><x2><y2>...<cls><x1><y1><x2><y2>". For image captioning, the output format could be "The image shows that <bos>".
(2) The parsed outputs are then appended to the original user instructions as suffix texts. The extended instructions are fed into the LLM-based decoder as queries.
(3) Since the output format contains special tokens, such as <cls>, <x1>, <y1>, <x2>, <y2>, and <bos>, by treating these tokens as queries, the LLM-based decoder can predict the corresponding results. This approach enables the detection task to run in parallel like a cloze task, while the captioning task remains next-token prediction.
We will make it clearer in our revised version.
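The three inference steps above can be sketched as follows. Here `parse_output_format` is a hypothetical stand-in for the fine-tuned Alpaca parser, and the regex simply locates the special-token query slots that the decoder would fill in parallel:

```python
import re

def parse_output_format(instruction):
    """Hypothetical stand-in for the fine-tuned Alpaca parser (step 1)."""
    if "detect" in instruction.lower():
        return "The bounding boxes are " + "<cls><x1><y1><x2><y2>" * 2
    return "The image shows that <bos>"

def build_decoder_query(instruction):
    # Step 2: append the parsed output format to the instruction as suffix text.
    suffix = parse_output_format(instruction)
    extended = instruction + " " + suffix
    # Step 3: the special tokens in the suffix act as query slots; a real
    # decoder would predict their values in parallel (cloze-style).
    slots = re.findall(r"<[a-z0-9]+>", suffix)
    return extended, slots

inst = "The image is: <image>. Please detect all objects in {'person': <c0>}."
extended, slots = build_decoder_query(inst)
print(slots[:5])  # ['<cls>', '<x1>', '<y1>', '<x2>', '<y2>']
```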
 
**Q2: The difference between VisionLLM and Pix2Seq.**
**A2:** Although both VisionLLM and Pix2Seq v1/v2 employ coordinate discretization for object detection tasks, _**they differ significantly in terms of task generality, model design, and decoding process**_.
**Task Generality:** VisionLLM allows users to customize vision tasks using language instructions, supporting user-tailored output formats, task targets, task descriptions, etc. In contrast, Pix2Seq v1 is a special model for object detection, and Pix2Seq v2 only supports pre-defined task switching with learnable prompt tokens, lacking the flexibility of task customization.
**Model Design:** VisionLLM consists of a series of careful designs for open-ended tasks, including (1) language instructions that align vision tasks with NLP tasks; (2) a flexible tokenizer guided by natural language instructions (Pix2Seq v2 uses _**unreadable embedding**_ for task switching); and (3) an open-ended task decoder based on LLMs along with an improved decoding process.
**Decoding Process:** Pix2Seq struggles to converge in open-ended task scenarios with random user instructions (see Table A(d) in supplementary material). VisionLLM solves this problem effectively by using its output-format-as-query approach, which enables the model to work with the Hungarian matching loss and handle highly random open-ended task instructions efficiently.
To sum up, VisionLLM and Pix2Seq are distinct models. Pix2Seq is a pioneering generalist model but has limitations (see Figure 1(a)). VisionLLM explores new possibilities for end-to-end models that unify vision and language tasks in the LLM era. We will clarify this in the revised version.
 
**Q3: Evaluation VisionLLM on more backbones.**
**A3:** We chose to evaluate our model on the ResNet-50 and InternImage-H backbones because ResNet-50 is widely recognized as a representative backbone at a common scale, and InternImage-H is known as a large-scale backbone with top-notch performance. Results on ResNet-50 and InternImage-H demonstrate the generality of VisionLLM on backbones at different parameter scales. In the table below, we include the results of ViT-B, which still meet our expectations.
| Method | Backbone | Open-Ended | Det. AP | Det. AP50 | Det. AP75 | Seg. AP | Seg. AP50 | Seg. AP75 | Grounding P\@0.5 | Caption BLEU-4 | Caption CIDEr |
| - | - | - | - | - | - | - | - | - | - | - | - |
| Uni-Perceiver | ViT-B | - | - | - | - | - | - | - | - | 32.0 | - |
| Uni-Perceiver-MoE | ViT-B | - | - | - | - | - | - | - | - | 33.2 | - |
| Uni-Perceiver-V2 | Swin-B | - | 58.6 | - | - | 50.6 | - | - | - | 35.4 | 116.9 |
| Pix2Seq v2 | ViT-B | - | 46.5 | - | - | 38.2 | - | - | - | 34.9 | - |
| VisionLLM-R50 | ResNet-50 | ✓ | 44.6 | 64.0 | 48.1 | 25.1 | 50.0 | 22.4 | 80.6 | 31.0 | 112.5 |
| VisionLLM-ViT-B | ViT-B | ✓ | 47.3 | 68.6 | 51.4 | 26.8 | 57.7 | 22.6 | 81.3 | 31.5 | 113.1 |
| VisionLLM-H | Intern-H | ✓ | 60.2 | 79.3 | 65.8 | 30.6 | 61.2 | 27.6 | 86.7 | 32.1 | 114.2 |
[1] Wang, Yizhong, et al. "Self-Instruct: Aligning language models with self-generated instructions." arXiv preprint arXiv:2212.10560 (2022).
Pdf: /pdf/778d7e71a376d80dabd02780cc4e5db92263ef37.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper constructs a vision/language model by passing visual features into an LLM. The authors train the model on several standard tasks as well as object detection and referring expression by adding special location tokens to the LLM's vocabulary. The model features a text-guided image tokenizer and an efficient decoding approach when generating segmentation or bounding boxes.
Strengths: - The suggestion in section 3.4 seems like a nice way to avoid auto-regressive decoding when it is not needed, although the method was a bit hard to parse.
- Showing that the approach of adapting visual features for an LLM can work for object detection and segmentation is interesting; these tasks have been less well explored in this area.
- The qualitative examples in the appendix at least suggest the model has instruction-following capabilities similar to other models tuned on the LLaVA dataset.
Weaknesses: - The scores are decent but not amazing. Only slightly better than Pix2Seq if using the same backbone, and the CIDEr scores are lower than relatively simple models like ClipCap or VL-T5.
- The central idea is essentially following the pix2seq method combined with the now pretty well studied method of adapting visual features for a LLM approach, which makes sense but does not feel hugely novel to me.
- The authors should consider evaluating following the methodology of LLaVA, given they are using that data.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Did the authors try pre-training the model? Pre-training to initially learn the alignment for the vision/language component is common practice for these kinds of models.
Can the model generalize to outputting bounding boxes for tasks other than refexp or object detection? For example, for pointing VQA questions where you output a bounding box to answer a question.
In section 3.4, what does "...feed the tokens of structural output format as queries to the decoder" mean? That they are used as the initial starting token? What happens if there are multiple objects, so there are then multiple class and x1 coordinates to produce? Or if the model needs to interleave text with structured output like in Figure 2a?
Table 1 is missing many models that achieve better scores; BEiT-3 is better at detection and captioning, for example. I think they should be included for reference.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I think the authors should at least note some of the potential issues with these kinds of models (bias, potential for abuse by generating misinformation, hallucination, etc.)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Details of the `output-format-as-query` decoding.**
**A1:** We thank the reviewer's appreciation of our output-format-as-query design. In Common Questions Q1, we provide more details about output-format-as-query regarding the data construction, training, and inference process. Here, we answer your detailed questions about it:
**Q1-1: What does "...feed the tokens of structural output format as queries to the decoder" mean? That they are used as the initial starting token?**
**A1-1:** Yes, the parsed outputs are appended to the original user instructions, and the resulting instructions are then fed into the LLM-based decoder.
**Q1-2: What happens if there are multiple objects, so there are then multiple classes and x1 coordinates to produce? Or if the model needs to interleave text with structured output like in Figure 2a?**
**A1-2:** For the perception tasks (e.g., detection), the output format consists of a string with 100 segments of "<cls><x1><y1><x2><y2>", which can accommodate multiple objects in the scene. As for interleaving text with structured output, our model naturally supports this, as constructed user instructions default to this format (see Common Questions Q1).
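For concreteness, here is a minimal sketch (not the authors' code) of how such a segment string could be parsed back into class/box pairs; the class names and regex are illustrative assumptions, and filtering of padding or background segments is left to the caller.

```python
import re

def parse_boxes(output: str):
    """Parse a string of "<cls><x1><y1><x2><y2>" segments into
    (class_token, [x1, y1, x2, y2]) tuples."""
    tokens = re.findall(r"<([^<>]+)>", output)
    boxes = []
    # Consume tokens five at a time: one class token, four coordinate tokens.
    for i in range(0, len(tokens) - 4, 5):
        cls, *coords = tokens[i:i + 5]
        boxes.append((cls, [int(c) for c in coords]))
    return boxes

# Example: two objects encoded in the structural output format.
out = "<person><10><20><110><220><dog><5><5><60><90>"
print(parse_boxes(out))  # [('person', [10, 20, 110, 220]), ('dog', [5, 5, 60, 90])]
```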
**Q2: The scores are decent but not amazing.**
**A2:** Besides performance scores, the primary objective of this work is to design a generalist vision model available for user-tailored open-ended tasks. However, there are a few issues that are beyond the scope of this paper:
(1) Due to the shared weights of our generalist model, conflicts may arise among different tasks, potentially leading to lower performance compared to specialized or foundation models that follow the "pre-train then fine-tune" paradigm (e.g., Pix2Seq v1, BEiT-3);
(2) We use LLaVA-Instruct-150K to preserve the language capability of LLMs during training, but this data tends to yield long and detailed captions, which are suboptimal for traditional BLEU and CIDEr metrics.
**Q3: The central idea is essentially following the pix2seq method.**
**A3:** We argue that our model significantly differs from Pix2Seq v1/v2 in terms of its ability to handle open-ended tasks, model architecture, and decoding process (See Common Questions Q2). We think the reason why the two models may seem similar is that both our model and the Pix2Seq series use coordinate discretization to model the perception task, but this is NOT the main contribution of this work.
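Since coordinate discretization is the one shared ingredient named here, a minimal sketch of Pix2Seq-style quantization may help; the bin count and rounding scheme are illustrative assumptions, not either paper's implementation.

```python
def discretize(coord: float, size: float, num_bins: int = 1000) -> int:
    """Quantize a continuous coordinate in [0, size] into one of
    num_bins discrete location-token bins."""
    b = int(coord / size * (num_bins - 1) + 0.5)
    return max(0, min(num_bins - 1, b))  # clamp to valid bin range

def dequantize(b: int, size: float, num_bins: int = 1000) -> float:
    """Map a bin index back to a continuous coordinate."""
    return b / (num_bins - 1) * size

# A box corner at x=256 in a 512-pixel image maps to roughly the middle bin.
bin_idx = discretize(256, 512)
print(bin_idx)                           # 500
print(round(dequantize(bin_idx, 512)))   # 256
```

Each continuous coordinate thus becomes one token from a small discrete vocabulary, which is what lets a language-model decoder emit boxes as ordinary token sequences.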
**Q4: The authors should consider evaluating following the methodology of LLaVA given they are using that data.**
**A4:** The evaluation method in LLaVA is unstable, as it depends on the online version of GPT-4, which is a closed-source system not available in all countries or regions. Furthermore, there is currently no widely recognized standard for such evaluation. Therefore, we adopt a more stable and controllable evaluation approach in this work: we test the performance of VisionLLM by generalizing it to various representative visual perception and understanding tasks, and evaluate it on standard benchmarks. In addition, we also design variant evaluations based on standard benchmarks to examine the open-ended task ability of our models (see Table 2(a)(b), and Figures B and C in the supplementary material).
**Q5: Did the authors try pre-training the model? Pre-training to initially learn the alignment for the vision/language component is common practice for these kinds of models.**
**A5:** The first training stage of VisionLLM involves open-ended detection tasks with random user instructions. This is a specific form of vision-language alignment.
**Q6: Can the model be generalized to output bounding boxes for tasks other than refexp or object detection? For example, for pointing VQA questions where you output a bounding box to answer a question.**
**A6:** Yes, our model can be generalized to use bounding boxes to answer questions. For example, as shown in Figure 2(a), when you ask a question like "What is the child eating?" in the class set `<class>`, VisionLLM will predict the bounding box of the doughnut as the answer to this question. We present more examples in Figure I in the attached PDF file for rebuttal to show this feature of VisionLLM.
**Q7: Table 1 is missing many models that achieve better scores; BEiT-3 is better at detection and captioning, for example. I think they should be included for reference.**
**A7:** In Table 1, we list recently popular generalist models capable of handling various tasks using shared weights. Differently, BEiT-3 is a foundation model that utilizes additional decoders for fine-tuning and incorporates a range of specialized designs; it does not support open-ended tasks. We will add and discuss these works in our revised version. Thanks for your suggestion.
**Q8: I think the authors should at least note some of the potential issues with these kinds of models (bias, potential for abuse by generating misinformation, hallucination, etc.).**
**A8:** Thank you for the suggestions! We will attempt to address this through some de-biasing methods before the model is released. | null | null | null | null | null | null |
RevColV2: Exploring Disentangled Representations in Masked Image Modeling | Accept (poster) | Summary: In this paper, the authors propose a new backbone, RevColV2, which is suitable for MIM pre-training and can learn disentangled representations during pre-training. Strong experimental results show its effectiveness.
Strengths: Please refer to Questions
Weaknesses: Please refer to Questions
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: ### strength
1. The paper is well-written and easy to follow
2. The proposed method is a novel combination of Mask Image Modeling and RevCol
3. The experiment part shows strong results.
### weakness
1. Some details are unclear: line 103 says 'the un-masked patches are input into each bottom-up column', while line 134 and Fig. 2 say 'masked image patches are fed into the bottom-up columns and reconstruct unseen patches through top-down column'.
Which one is the correct way?
2. Compared with the vanilla ViT, the proposed model is quite complex, so I'm wondering about the speed of both pre-training and inference.
3. Fig. 3 compares the MAE decoder with the RevColV2 decoder and claims that the proposed decoder disentangles low-level and high-level features better. But for the linear accuracy gap between Level 1 and Level 4, MAE's gap is 14.6 while the proposed method's is 18.7; the difference does not seem significant.
4. PeCo is a strong ImageNet-1k classification baseline, yet only the segmentation result on ADE20K is reported in the paper. Why?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please refer to Questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer W68W,
Thank you for your feedback. We will address your concerns below.
**Q1**: Some details are unclear: line 103 says 'the un-masked patches are input into each bottom-up column', while line 134 and Fig. 2 say 'masked image patches are fed into the bottom-up columns and reconstruct unseen patches through top-down column'. Which one is the correct way?
**A1**: Sorry, Line 134 is an obvious mistake that we ignored during proofreading. The correct one should be 'As shown in Figure 2, in MIM pre-training, \textbf{un-masked} image patches are fed into the bottom-up columns and reconstruct unseen patches through top-down columns.' For a typical MIM task, the model receives unmasked patches in the encoder and reconstructs the masked patches (unseen patches) in the decoder. In our pre-training, this scheme remains the same. Thanks for pointing out this mistake and we will correct it in the next version.
**Q2**: Compared with the vanilla ViT, the proposed model is quite complex, so I'm wondering about the speed of both pre-training and inference.
**A2**: We give a detailed benchmark and analysis of the speed of RevColV2 on the global rebuttal, please refer to it.
**Q3**: Fig. 3 compares the MAE decoder with the RevColV2 decoder and claims that the proposed decoder disentangles low-level and high-level features better. But for the linear accuracy gap between Level 1 and Level 4, MAE's gap is 14.6 while the proposed method's is 18.7; the difference does not seem significant.
**A3**: Thanks for pointing this out. The difference between the top level and the bottom level of the last column is only one aspect of the disentangled-learning visualization. As we can see in Figure 3 of the paper, beyond the difference in the linear probing results of these two levels, the whole distribution across all levels changes (performance in the left setting first increases then decreases from the left to the right columns, while performance in the right setting gradually increases). We consider this whole-distribution change to be an equally important aspect of the representation visualization of RevColV2 models.
**Q4**: PeCo is a strong ImageNet-1k classification baseline, yet only the segmentation result on ADE20K is reported in the paper. Why?
**A4**: Yes, PeCo really does a great job, especially on ImageNet-1k classification. We will append this work to Table 2 in the next version. Specifically, PeCo and RevColV2 achieve comparable results on the ImageNet-1K dataset only (84.7 vs. 84.5 for the base-size model and 86.3 vs. 86.5 for the large-size model).
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply; most of my concerns are resolved, and I keep my score as weak accept. | Summary: This paper proposes a novel architecture to explore disentangled representations with masked image modeling. Different from previous MAE-like methods, it designs a unified network and does not drop the decoder in downstream tasks. The paper shows that disentangled representations are learned at different network levels. Experiments are done on ImageNet, MS-COCO, and ADE20K for base and large model sizes.
Strengths: 1. The idea of combining MIM with disentangled representation learning is novel.
2. The idea of keeping the entire autoencoder tackles the problem of inconsistent representation between pre-training and fine-tuning.
3. The performance in downstream vision tasks is competitive.
Weaknesses: 1. Figure 2 is misleading: is the pre-training target on ImageNet-1k MIM alone, or MIM combined with image labels?
2. The network parameters in Table 1 and Table 2 are not consistent, e.g., for base size, 101M in Table 1 and 88M in Table 2.
3. As described in line 204, the initialized weights for semantic segmentation come from the ImageNet-1k classification fine-tuned model, not the MIM pre-trained model. It is not fair to compare it with a bunch of MIM pre-trained models.
4. The DropPath op seems to conflict with the idea of disentangled representation at each level.
5. In the supplementary materials, the linear probing experiments are based on the bottom-up columns, which conflicts with the idea of keeping the entire autoencoder.
6. This paper had several typos and grammar mistakes. E.g.
line 1: per-training -> pre-training
line 13: performances -> performance
line 15: intermediately -> intermediate
line 66: per-trained -> pre-trained
line 211: fine-turning -> fine-tuning
7. There are some factual errors in this paper, e.g., in Table 2 the target of ConvNeXt-B is label, not pixel, and the target of BEiT-L is DALL-E, not pixel.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Question:
1. Speed. It seems like the training and inference speed of the RevCol architecture is slower than that of a deeper plain ViT. What is the training and inference speed of RevColV2 in pre-training and fine-tuning?
2. What is the performance of dropping the decoder and only using the encoder for downstream tasks?
3. How to show the scalability of RevColV2?
4. Is there more evidence to show the representation difference of RevColV2 and RevCol? E.g. attention distance, KL divergence between different attention heads, etc.
5. What is the performance of not using DropPath?
6. Why using bottom-up columns representation for linear probing?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Let us answer your questions point by point.
**Q1**: Figure 2 is misleading: is the pre-training target on ImageNet-1k MIM alone, or MIM combined with image labels?
**A1**: Thanks for pointing out this issue. We do not depict the training task as a single task: Figure 2 illustrates the training targets for both pre-training and fine-tuning. We use raw image pixels as targets in self-supervised pre-training, without any labels; labels are used only in downstream tasks such as image classification. We will expand the caption of Figure 2 in the next version.
**Q2**: The network parameters in Table 1 and Table 2 are not consistent, e.g., for base size, 101M in Table 1 and 88M in Table 2.
**A2**: Tables 1 and 2 report the parameters for different tasks. In Table 1, we show the network parameters for pre-training, which include both the encoder and the decoder. In Table 2, we show the parameters for image classification (fine-tuning and inference). As the classification head is applied on top of the last top-down column, only about half of the decoder participates in the computation (the upper triangle of the decoder), which is why the parameter count in Table 2 is lower than in Table 1. We will revise the paper to make this clear.
**Q3**: The initialized weights for semantic segmentation come from the ImageNet-1k classification fine-tuned model, not the MIM pre-trained model. It is not fair to compare it with a bunch of MIM pre-trained models.
**A3**: Before conducting our experiments, we surveyed several MIM works (including HiViT, BEiT, and ConvNeXt V2). All of them use ImageNet fine-tuned model weights for the segmentation task, so we follow the settings of these previous MIM works. Indeed, we found that image labels can help on ADE20K segmentation, but for detection they can harm performance.
**Q4**: The DropPath op seems to conflict with the idea of disentangled representation at each level.
**A4**: To our understanding, the concern is that dropping parts of the input could lead to information loss during propagation. In training (we add DropPath only in fine-tuning, not in pre-training), the DropPath operation is applied inside each building block, while the shortcut bypass (identity mapping) still keeps the input from being discarded during propagation. During inference, DropPath is inactive, so there is no conflict with feature disentanglement.
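To illustrate the point above — the identity shortcut always survives even when the residual branch is dropped — here is a minimal, framework-free sketch of stochastic depth (DropPath); the function is illustrative, not the authors' implementation, and the scaling convention is an assumption.

```python
import random

def drop_path(x, residual, drop_prob: float, training: bool):
    """Stochastic depth on a residual block: during training, the residual
    branch is dropped with probability drop_prob (and rescaled otherwise),
    while the identity shortcut always passes x through, so the input is
    never discarded. At inference the op is a plain residual addition."""
    if not training or drop_prob == 0.0:
        return [xi + ri for xi, ri in zip(x, residual)]
    if random.random() < drop_prob:
        return list(x)  # branch dropped; identity shortcut survives
    scale = 1.0 / (1.0 - drop_prob)  # rescale so the expectation matches inference
    return [xi + ri * scale for xi, ri in zip(x, residual)]

# At inference, DropPath is inactive: the output is plain residual addition.
print(drop_path([1.0, 2.0], [0.5, 0.5], drop_prob=0.2, training=False))  # [1.5, 2.5]
```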
**Q5**: The linear probing experiments are based on the bottom-up columns, which conflicts with the idea of keeping the entire autoencoder. Why use the bottom-up column representation for linear probing?
**A5**: This is a good point. We use the bottom-up column representation for linear probing because we found it performs better. However, this may seem to conflict with the idea of keeping the entire autoencoder. We investigated this phenomenon and found that the reason the linear probing result of the bottom-up column representation is slightly higher than that of the top-down column representation is the depth of the decoder columns: we use a shallower decoder in the default setting because a lightweight decoder can be quickly adapted to downstream fine-tuning, but this harms the linear probing performance of the decoder representation.
Although this lightweight setting harms the linear probing results of the top-down columns, we focus more on downstream fine-tuning results in this paper, so we keep the lightweight (shallower) decoder setting by default. Nevertheless, we will report the linear probing of the top-down column representation in the revision: specifically, the results are 64.3\% for the base-size model and 77.2\% for the large-size model. These results are still significantly higher than the other methods in Table 5 of the supplementary material.
**Q6**: This paper had several typos and grammar mistakes. There are some factual errors in this paper.
**A6**: Thanks for pointing out the typos and mistakes. We have already rectified them.
**Q7**: What is the training and inference speed of RevColV2 in pre-training and fine-tuning?
**A7**: We give a detailed benchmark and analysis of the speed of RevColV2 in the global rebuttal, mainly focusing on inference; please refer to it. For training, the conclusion is the same as for inference, but training speed depends heavily on the computational resources.
**Q8**: What is the performance of dropping the decoder and only using the encoder for downstream tasks?
**A8**: In the original submission, we conducted a key ablation study that uses only the encoder architecture in both pre-training and fine-tuning. To our understanding, you are asking about using only the encoder architecture in the fine-tuning stage (with the encoder still pre-trained within the entire autoencoder). We further conducted experiments under this setting with RevColV2-B, and the results are:
83.9\% (-0.8\%) top-1 accuracy on the ImageNet-1K dataset and 50.4 (-0.7) mIoU on the ADE20K dataset. These results are consistent with our original submission and verify the effectiveness of the multi-column decoder design.
**Q9**: How to show the scalability of RevColV2?
**A9**: We investigate the data scaling ability of RevColV2 with the additional CLIP teacher and larger dataset. Please refer to the global rebuttal.
**Q10**: Is there more evidence to show the representation difference of RevColV2 and RevCol?
**A10**: We visualize the attention distance of each self-attention block for RevColV1-ViT-B and RevColV2-B. As shown in the submitted material, the attention distances in the MIM pre-trained RevColV2 are distributed in a more diverse manner than in the supervised-trained RevCol. MIM models often have more diverse attention heads, which tend to aggregate both local and global pixels. This conclusion accords with <https://arxiv.org/abs/2205.13543>, which did a similar visualization on ViT.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed responses! I will keep my initial score.
Strengths: 1. Good motivation to keep the same structure for both pretraining and finetuning.
2. Better results are achieved compared with RevCol.
Weaknesses: 1. This paper is a little bit hard to follow, and I do not think the figures help a lot for understanding this paper. Maybe better visualizations/figures are needed.
2. Tiny and small size models should also be explored, or the authors need strong arguments for why they did not do so.
3. Some papers achieve better results than this method, but the authors did not present or compare with them.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What are the performances of the tiny and small size models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer DHnY,
Thank you for your feedback. We will address your concerns below.
**Q1**: This paper is a little bit hard to follow, and I do not think the figures help a lot for understanding this paper. Maybe better visualizations/figures are needed.
**A1**: Figure 1 shows the motivation of RevColV2, and Figure 2 shows the overall pipeline of RevColV2 in both pre-training and fine-tuning. We are sorry to have caused confusion with these illustrations. To better convey the motivation and method of RevColV2, we add a more detailed figure in the additional one-page rebuttal material and hope it can address your concerns.
Beyond this figure, we also provide a clearer statement of the motivation and main method of RevColV2:
As shown in Figure 1(a) in the original submission, the traditional supervised method relies on large amounts of image labels, learning to project a given image into the label space. However, this label space is highly compact, being based on one-hot scalar labels; in other words, projecting into it under the supervised learning scheme incurs information loss during pre-training. This is inconsistent with the purpose of representation learning and causes sub-optimal performance. Meanwhile, as shown in Figure 1(b) in the original submission, existing masked autoencoders employ an encoder to embed masked images into semantic features and a decoder to reconstruct the unseen patches. Under this pre-training paradigm, features are rich in low-level information at both the input and the output, while the semantic features desired for downstream tasks reside inside the network. A common way to utilize such semantic features is to manually partition the encoder and decoder based on the amount of semantic information in the features and discard the decoder during downstream fine-tuning. Even so, discarding parts of the pre-trained network can incur information loss when transferring to downstream visual tasks. RevColV2 tackles this problem by keeping the entire autoencoder architecture in both pre-training and fine-tuning. The key to keeping the entire network is separating low-level and semantic information during the image reconstruction process in pre-training. To accomplish this, we re-design the architecture of RevCol: the new architecture contains a bottom-up reversible-column encoder and a top-down reversible-column decoder. The bottom-up and top-down columns are fully symmetric, taking masked images and encoder embeddings as input, respectively. During MIM pre-training, the raw-image reconstruction loss is attached to the end of the last column in the decoder.
Hence, low-level information primarily sinks to the bottom level and semantic information moves upwards to the other stages via lossless propagation, as shown in Figure 1(d) in the original submission. For other architecture details, please refer to our updated overall view of RevCol in the global rebuttal material.
**Q2**: Tiny and small size models should also be explored, or the authors need strong arguments for why they did not do so.
**A2**: Tiny and small size models are not suitable for the MIM training scheme. We seldom see any tiny or small size models in other transformer-based MIM works, including but not limited to MAE, BEiT, CAE, EVA, PeCo, and SimMIM. In the TinyMIM paper [1], the authors point out that when the model size is small, MIM pre-training can harm the fine-tuning accuracy on ImageNet-1k classification (Table 2 in [1]). Other methods (TinyMIM, EVA-02) that include an extra teacher and use feature distillation in pre-training can make up for this defect. Considering that our work is trained with a purely MIM objective, we do not include any tiny or small size models.
[1] Ren, Sucheng, Fangyun Wei, Zheng Zhang, and Han Hu. "TinyMIM: An empirical study of distilling MIM pre-trained models." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3687-3697. 2023.
**Q3**: Some papers achieve better results than this method, but the authors did not present or compare with them.
**A3**: Yes, some papers adopt the masked-distillation method and use strong teachers, extra data, or different training schemes in pre-training. Considering the space limitation, we cannot include all of them in the paper. We list one representative work here:
EVA-02 (EVA-02: A Visual Representation for Neon Genesis)
The EVA series adopts a mask-distillation pre-training scheme with a 1B-parameter EVA-CLIP teacher and uses a merged dataset consisting of IN-21K, CC12M, CC3M, COCO, ADE20K, Objects365, and OpenImages for pre-training. EVA-02 also uses ImageNet-21K intermediate supervised fine-tuning.
We conduct similar experiments that also include a mask-distillation pre-training scheme, with a 300M-parameter OpenCLIP-ViT-L teacher, pre-trained on the LAION-400M dataset. We do not apply any intermediate fine-tuning.
We show the experimental results in Table 1 in the additional one page global rebuttal material.
Our model does not achieve better performance than EVA-02 because of the inconsistent training settings: (1) pre-training dataset, (2) strong teacher, (3) intermediate fine-tuning. But RevColV2 achieves better performance than all other methods. We give details about the method, training, and results in the global rebuttal; please refer to it.
In the original paper, we did not adopt such a training scheme because we wanted to use a simple training scheme (ImageNet-1k MIM pre-training + supervised fine-tuning) to show the effectiveness of our method for research purposes. Strong teachers, additional data, and tricks certainly yield strong performance; however, in that case we cannot determine whether the progress is attributable to our method itself or to the other tricks.
---
Rebuttal Comment 1.1:
Title: Read the rebuttal and discuss with authors ASAP
Comment: Dear reviewer DHnY,
Since you are the only one who holds a negative attitude, your opinion is super important. Could you please read the rebuttal ASAP and discuss with the authors and the reviewers? Feel free to raise any new concerns you may have.
Best
Area Chair
---
Rebuttal Comment 1.2:
Comment: Thank you for your response. My concerns are all addressed, and I intend to change my rating to borderline accept. | Summary: The paper proposes a revised version of RevCol, referred to as RevColV2, which is applicable to MAE training. RevColV2 consists of an encoder-decoder framework: the encoder is the same as RevCol, while the decoder uses reversed column connections. The paper also uses a unified fine-tuning framework utilizing the decoder for downstream tasks. RevColV2 demonstrates impressive performance on diverse vision tasks.
Strengths: - RevColV2 decoder presents an interesting design with plenty of novelty. In particular, reversed column connection between enc-dec is an innovative architectural approach for MAE training.
- RevColV2 achieves meaningful performance improvements on diverse tasks.
- Multi-column architecture is different from the mainstream of transformer architecture, which enhances the novelty of RevColV2
Weaknesses: - The basic component of the architecture is the same as RevCol. RevColV2 is an improved version, not a new architecture. Although it is interesting, the contribution of the V2 paper is limited.
- The paper's contribution is similar to ConvNeXt V2, enhancing existing architecture with MAE training and minor architecture revision. I think shedding light on existing architecture can be a contribution. But, similarity with ConvNeXt V2 paper might decrease the impact of this paper.
- There are no reports of throughput or latency, and FLOPs numbers for detection and segmentation are omitted. I think throughput and FLOPs are necessary for architecture research in 2023. I strongly recommend that the authors report those numbers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Although the inverse pass is well described in the main section, I don't know where it is used in the experiments. What is the role of the inverse pass in RevCol V2?
- IN22k intermediate fine-tuning is an interesting case. It would be better if RevCol V2 were compared with the IN21k training in the following papers.
- [1] BEIT V2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
- [2] DeiT III: Revenge of the ViT
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer xDT5,
Thank you for your valuable feedback. We will address the concerns and answer them below.
**Q1**: Although it is interesting, the contribution of V2 paper is limited.
**A1**: RevColV2 is a new macro design that handles the inconsistent representations between pre-training and fine-tuning by keeping the entire auto-encoder during both pre-training and fine-tuning. As you noted in the 'Strengths' section ("RevColV2 decoder presents an interesting design with plenty of novelty."), the top-down decoder in RevColV2 is carefully designed to learn disentangled representations in MIM pre-training. We think this new paradigm and its impressive performance are valuable to the research community.
**Q2**: The paper's contribution is similar to ConvNeXt V2, enhancing existing architecture with MAE training and minor architecture revision. I think shedding light on existing architecture can be a contribution. But, similarity with ConvNeXt V2 paper might decrease the impact of this paper.
**A2**: Although both RevColV2 and ConvNeXt V2 enhance an existing architecture with MAE training, the motivations behind them are different. ConvNeXt V2 aims to handle the problem of MIM training in modern CNN architectures, while RevColV2 tries to solve the inconsistent representations between pre-training and fine-tuning (which is not mentioned or handled in ConvNeXt V2). Besides, RevColV2 is not only an improvement over the previous version, but also a pioneering effort to explore disentangled representations in MIM pre-training.
**Q3**: There are no reports for the throughput or latency. FLOPs numbers for detection and segmentation are omitted. I think throughput and FLOPs are necessary for architecture research in 2023. I strongly recommend authors to report those numbers.
**A3**: Thanks for your suggestions. We omitted the FLOPs numbers because of limited space, and we found that some papers also omit FLOPs on downstream tasks (such as HiViT). Nevertheless, we will add this comparison in a future revision.
For example, the FLOPs number on the semantic segmentation task are:
| Model | FLOPs |
| ---------- | ----- |
| MAE-B | 2342G |
| RevColV2-B | 1008G |
| MAE-L | 4779G |
| RevColV2-L | 2990G |
RevColV2 models have lower FLOPs on UperNet semantic segmentation because the channel dimension is smaller (so the UperHead is more lightweight) and the UperHead contributes a large proportion of the FLOPs.
Besides, we give a detailed benchmark of the speed of RevColV2 in the global rebuttal; please refer to that.
**Q4**: Although the inverse pass is well described in the main section, I don't know where it is used in the experiments. What is the role of the inverse pass in RevCol V2?
**A4**: The inverse pass is a basic component of RevColV2, similar to the V1 version. In the pre-training and fine-tuning of RevColV2, we do not save the intermediate feature maps (except those of the last column) during the forward pass. When performing the backward pass, we use the inverse pass to recompute the feature maps and gradients of the previous columns from the last column's features. Thus the memory cost is lower compared with normal training (you can refer to the RevCol v1 paper for more details). We will make this clearer in the revision.
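The recompute-in-backward idea described above can be sketched in a few lines. This is a minimal illustration of generic reversible coupling, not the authors' RevColV2 code (whose column updates are more involved); the names `reversible_forward` and `reversible_inverse` are our own.

```python
# Minimal sketch of reversible coupling (illustrative only, not the authors'
# RevColV2 implementation): f and g are arbitrary sub-networks.
def reversible_forward(x1, x2, f, g):
    y1 = x1 + f(x2)   # first coupling step
    y2 = x2 + g(y1)   # second coupling step
    return y1, y2     # only these outputs need to be kept

def reversible_inverse(y1, y2, f, g):
    # Recover the inputs from the outputs during the backward pass,
    # so no intermediate activations have to be saved in the forward pass.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2
```

With exact arithmetic the inverse reconstructs the inputs exactly; with floating point there is a small numerical error, which is why reversible architectures are designed so that this recomputation stays stable.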
**Q5**: IN22k intermediate fine-tuning is an interesting case. It would be better if RevCol V2 is compared with IN21k training in the following papers (BEIT V2 and DeiT III).
**A5**: Thank you for your suggestions. BEiT V2 uses vector-quantized visual tokenizers that are trained with an additional CLIP model. We think directly comparing with a CLIP-based method is not fair. For DeiT III, which only uses the ImageNet dataset without additional knowledge, we will add the comparison. We also show the comparison results here:
ImageNet1K only:
| Model | Param. | FLOPs | ACC |
| ----------- | ---- | --- | ---- |
| DeiT III -B | 87M | 18G | 83.8 |
| RevColV2-B | 88M | 19G | 84.7 |
| DeiT III -L | 304M | 62G | 84.9 |
| RevColV2-L | 327M | 67G | 86.3 |
ImageNet1K + 22K:
| Model | Param. | FLOPs | ACC |
| ----------- | ---- | --- | ---- |
| DeiT III -B | 87M | 18G | 85.7 |
| RevColV2-B | 88M | 19G | 86.2 |
| DeiT III -L | 304M | 62G | 87.0 |
| RevColV2-L | 327M | 67G | 87.4 |
We also investigate the data scaling property of RevColV2 using an additional CLIP model and compare it with other CLIP-based foundation models. The scaling details and results (including the comparison to BEiT V2 and other counterparts) can be found in the global rebuttal; please refer to it.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
It has addressed some of my concerns.
But I still want to know the latency or throughput of RevColV2.
Could you provide that information?
If the depth of RevColV2 leads to inefficiencies in GPU computation, I would also accept the throughput for a 128-batch on GPU or latency on CPU.
I believe throughput/latency comparison is essential for network architecture papers.
---
Reply to Comment 1.1.1:
Title: Further response
Comment: Dear Reviewer xDT5, thanks for your discussion.
We agree that the analysis of latency and throughput of RevColV2 is essential for network architecture design. In the global rebuttal and its additional PDF file, we show the latency of RevColV2 and the ViT baseline on a single A100 GPU with batch size 32. The latency of RevColV2 is higher than the ViT baseline because of frequent fragmented memory access. Given the same number of FLOPs, the key cause of the lower speed is the number of blocks (depth) in RevColV2 models. The original RevCol V1 work already observed this phenomenon, and we have tried to design a shallower network than the V1 version, but it is still slower than vanilla ViT. Here, we supplement more analysis on the impact of batch size. We show throughput (#images/s) under different batch sizes for RevColV2-L and ViT-L on a single A100 GPU.
|| bs=16 | bs=32 | bs=64 | bs=128 | bs=256 | bs=512 |
| ---------- | ----- | ----- | ----- | ------ | ------ | ------ |
| RevColV2-L | 432 | 629 | 661 | 697 | 721 | 741 |
| ViT-L | 730 | 754 | 786 | 811 | 820 | 823 |
| speedup | 0.591 | 0.834 | 0.841 | 0.859 | 0.879 | 0.900 |
The results show that as the batch size increases, the inference speed gap between RevColV2-L and ViT-L narrows, because the fragmented memory access time can be amortized across samples. Although the speed of RevColV2 is lower than vanilla ViT, we still think this can be addressed by advanced techniques such as kernel fusion and pipeline parallelism for multi-column networks. We will add this analysis, along with the global rebuttal about speed (throughput and latency), in the next revision. We hope this response can ease your concerns, and please let us know if you have any questions. | Rebuttal 1:
Rebuttal: Dear all,
We thank all the reviewers for their efforts in commenting on our submission. The original reviews recognised our novelty (xDT5, yT3P, W68W) and the motivation behind RevColV2 (6koM, DHnY), and acknowledged the performance of RevColV2 (xDT5, yT3P, W68W). The main concerns of the reviewers focus on speed (6koM, xDT5, yT3P, W68W), the scaling property of RevColV2 (6koM, yT3P), additional ablations (6koM, yT3P), performance (6koM, DHnY), and other detailed issues.
During the rebuttal period, we conducted additional experiments and further analyzed the computation cost of RevColV2. We added more comparisons to modern architectures and methods (DeiT III, PeCo, BEiTV2, MaskDistill, EVA02) and additional ablations (encoder only for downstream tasks). We give an analysis of the computation cost of RevColV2. We further investigate the data scaling property of RevColV2 with the help of an additional CLIP model and a larger dataset. For all concerns and questions, we made detailed explanations point by point.
We hope our responses can clarify the issues of the reviewers. If there are any other questions, please feel free to ask.
**Global responses:**
**1. Scaling**
We further investigate the scaling property of RevColV2 with the help of an additional teacher. The main idea of RevColV2 is to learn disentangled representations during pre-training so that the entire autoencoder can be kept during fine-tuning. This is accomplished by reconstructing the masked image patches at the bottom level of the top-down column decoder; the semantic features are accordingly disentangled to the top levels. We take a further step: explicitly jointly learning masked semantic features at the top level of the top-down column decoder. Specifically, we use OpenCLIP-L as the teacher to represent the semantic features, similar to MaskDistill and EVA.
Besides the additional teacher, we use a larger dataset, Laion400M, which contains about 400M unlabeled images, in pre-training. Note that we do not use datasets such as COCO, ADE20K, Objects365, etc. in pre-training, to avoid artificially fitting to a specific distribution (this is different from EVA-02, which uses a merged dataset that overlaps with downstream-task data).
We pre-train for 800 ImageNet-1k-equivalent epochs on the Laion400M dataset and then 300 epochs on the ImageNet-1k dataset. Then we evaluate our model on downstream tasks such as ImageNet1K classification, COCO detection with Cascade Mask R-CNN, and ADE20K semantic segmentation with Mask2Former. The newly trained RevColV2-L achieves 87.7\% Top-1 accuracy on ImageNet-1k classification with $224\times224$ input resolution. The larger dataset and the extra teacher lead to better performance compared with purely IN-1k MIM pre-training (86.3\%) and IN-1k MIM + IN-22k intermediate fine-tuning (87.4\%). The performance gain is more prominent on dense prediction tasks. Please see Table 1 of the global rebuttal for more experimental results.
These results verify the data scaling ability of RevColV2, and we hope the RevColV2-L with mask distillation can become a new foundation model in the vision community.
**2. Speed**
We are aware that the current model variants of RevColV2 introduce more latency compared with other works with a similar number of parameters and FLOPs, such as ViT.
We test the inference latency of the model variants in Table 2 of the additional page. As described in the Speed section of [RevColV1](https://openreview.net/forum?id=Oc2vlWU0jFY&noteId=eots0qdyEv), fragmented memory access takes a large part of the latency. In RevColV2, we made some improvements: 1) remove the up-sample and down-sample operations of RevColV1; 2) reduce the total number of blocks; 3) use a hardware-friendly architecture without hierarchy. As shown in Table 2 of the rebuttal PDF file, RevColV2 has lower latency than the V1 version during inference, but it is still 1.21x higher than ViT. This is because of the large number of building blocks in RevColV2-L (about twice that of ViT-L). Though we reduce the total number of blocks, the multi-column RevColV2 still requires at least 12 blocks per column in the encoder; a shallower column leads to coarser representations, which could harm performance. On the other hand, if we make the ViT model deeper while maintaining the same FLOPs, ViT-L-deeper (48 blocks) and RevColV2-L (48 blocks) have similar latency.
In addition to the above comparison, the fragmented memory access can be optimized by techniques that can be investigated in future work. Here, we give two directions that may be further studied:
- Kernel fusion. This can reduce the frequent access of the memory caused by a large number of blocks.
- Model parallelism. Before the computation of the previous columns is finished, parts of the current column can be computed in parallel. This is inherent to the multi-column network and can be further studied to speed up inference and training.
Pdf: /pdf/03a8a090cd908e72938aba9eaeac4cc464462075.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces "RevColv2," an advancement over the RevCol model, enabling compatibility with MIM training. The authors propose a new architecture comprising a bottom-up reversible column encoder and a top-down decoder, facilitating MIM compatibility while preserving disentangled low-level and semantic information throughout the network. The authors conduct experiments on ImageNet, detection, and segmentation tasks, and the results show that it achieves strong results on ImageNet and ADE20K.
Strengths: • The idea of RevColv2 to make RevCol compatible with MIM and the design of top-down column decoder is intuitive. It’s nice to see some downstream tasks could benefit from both pre-trained encoder and decoder.
• The illustration of the key motivation of maintaining disentangled low-level and semantic information is clear and is further verified by the analysis in Figure 3.
Weaknesses: * The results on ImageNet-1K are strong; however, there is a lack of speed comparison with other methods. It would be valuable to assess the runtime speed for both pre-training and fine-tuning stages, particularly considering the impact of the reversible column network design on speed.
* One benefit for reversible networks is memory-saving (at the cost of some speed). It would be beneficial to discuss whether this holds true for RevColv2. Exploring the trade-off between memory usage and speed for RevColv2 will add valuable insights to the paper.
* In figure 2, the sequence length is different for the same level in the encoder and decoder. It seems unclear about the strategy used to handle this dimension change when connecting the encoder to the decoder.
* For dense prediction tasks, RevColv2 utilizes both encoder and decoder pre-trained weights. To verify the effectiveness of this approach, one missing ablation is to compare to a variant that employs the same encoder and decoder during downstream fine-tuning but only utilizes the encoder's pre-trained weights while initializing the decoder weights randomly.
* While the COCO detection results for the base model are strong, the performance of RevColv2-L appears to lag behind ViTDet-L using Mask R-CNN (54.0 vs. 55.6, citing the results from the ViTDet paper). Additionally, no results for RevColv2-L with Cascade Mask R-CNN are reported. It would be insightful to discuss the scaling results for RevColv2 on detection and provide some intuition on the potential reasons behind the observed performance differences.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, the authors propose RevColV2 to make the RevCol network compatible with MIM training, and it achieves consistently better results than RevCol. My main concerns are about some missing analyses/discussions for some ablations and results, as listed in the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No limitations have been discussed. Potential limitations worth discussing include the speed issue and further scaling of the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 6koM,
Thank you for your valuable feedback. We will address the concerns and answer them below.
**Q1**: The results on ImageNet-1K are strong; however, there is a lack of speed comparison with other methods. It would be valuable to assess the runtime speed for both pre-training and fine-tuning stages, particularly considering the impact of the reversible column network design on speed.
**A1**: We analyze the speed of RevColV2 and the impact of reversible columns in the global rebuttal, which mainly focuses on model inference. As for pre-training and fine-tuning speed, it leads to the same conclusion as in the global rebuttal. Using reversibility in training is similar to gradient checkpointing, which recomputes intermediate activations during backward propagation. Compared to checkpointing, our reversible backward pass saves more GPU memory, so we can use a larger batch size to speed up training. In addition, the running time of training depends heavily on the available computation resources.
**Q2**: One benefit for reversible networks is memory-saving (at the cost of some speed). It would be beneficial to discuss whether this holds true for RevColv2. Exploring the trade-off between memory usage and speed for RevColv2 will add valuable insights to the paper.
**A2**: Yes, this still holds true for RevColV2. The reversible column is the basic component of RevColV2. In the forward pass, we do not need to save the intermediate features; in the inverse pass, we can recompute the feature maps from the last column's outputs. This means the memory cost is significantly lower: in our practice, it uses only about a quarter of the memory of the non-reversible counterpart on RevColV2-L. However, due to the feature re-computation and our vanilla implementation, the reversible and non-reversible forward-backward passes take about the same time for a fixed number of samples during pre-training. But the reversible network supports running on limited computing resources, such as an RTX 2080 Ti with 11GB of memory.
**Q3**: In figure 2, the sequence length is different for the same level in the encoder and decoder. It seems unclear about the strategy used to handle this dimension change when connecting the encoder to the decoder.
**A3**: For downstream tasks, the sequence lengths of the encoder and decoder are the same. For pre-training in Figure 2, these dimensions are different because of the masking strategy. We use the same technique as MAE to align the sequence lengths: in the encoder, only the visible unmasked patches are used as input, while in the decoder the input is the full set of tokens, consisting of both the encoded visible patches and mask tokens. Line 112 of the original submission describes this practice, and we will make it clearer.
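The alignment described above can be illustrated with a small sketch. This is not the authors' code (the real implementation operates on batched tensors with a learned mask token); the helper name `align_for_decoder` is our own.

```python
# Illustrative sketch of MAE-style token alignment (not the authors' code):
# the encoder processes only visible tokens; before the decoder, a shared
# mask token is re-inserted at masked positions to restore the full length.
def align_for_decoder(encoded_visible, mask, mask_token):
    # mask[i] is True where patch i was masked out before the encoder;
    # visible tokens are consumed in order, mask tokens fill the gaps.
    it = iter(encoded_visible)
    return [mask_token if m else next(it) for m in mask]
```

For example, two encoded visible tokens with a mask pattern of length three yield a decoder input of length three, with the mask token at the masked position.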
**Q4**: To verify the effectiveness of this approach, one missing ablation is to compare to a variant that employs the same encoder and decoder during downstream fine-tuning but only utilizes the encoder's pre-trained weights while initializing the decoder weights randomly.
**A4**: This is a good point. We ran experiments with RevColV2-B that only utilize the encoder's pre-trained weights while initializing the decoder weights randomly. This variant achieves 84.4\% (-0.3\%) Top-1 accuracy on the ImageNet-1K dataset and 50.7 (-0.6) mIoU on the ADE20K dataset, using only the ImageNet-1K MIM pre-trained encoder weights. These experimental results support the same conclusion as the paper: the pre-trained decoder is necessary for RevColV2. We will add this ablation experiment to the paper.
**Q5**: While the COCO detection results for the base model are strong, the performance of RevColv2-L appears to lag behind ViTDet-L using Mask R-CNN (54.0 vs. 55.6, citing the results from the ViTDet paper). Additionally, no results for RevColv2-L with Cascade Mask R-CNN are reported. It would be insightful to discussions the scaling results for RevColv2 on detection and provide some intuition on the potential reasons behind the observed performance differences.
**A5**: In the original submission, we reproduced the results of the ViT backbone on the Mask R-CNN detection framework using different hyper-parameters from ViTDet-L, due to limited computing resources. Thus the resulting RevColV2-L with Mask R-CNN shows sub-optimal performance.
We investigate the data scaling ability of RevColV2. Specifically, we propose a new learning paradigm for RevColV2 that jointly models the masked image patches (on the top level of the last decoder column) and CLIP features (on the bottom level of the last decoder column) during pre-training. The resulting model shows very impressive results on downstream tasks. More details and results are available in the global rebuttal, please refer to it.
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: Thanks to the authors for the rebuttal with additional explanations and experiments. The rebuttal addressed some concerns, but I still have questions regarding Q1/Q2 and Q5.
Q1/Q2: The authors provided some inference speed comparisons. However, I remain curious about the unavailability of training speed comparisons. Given that RevColv2 apparently employs the same batch size as MAE (4096), I believe it would be feasible to measure the training speed and memory usage under the same settings. I think this comparison would contribute to understanding the trade-offs of RevColv2 when comparing with other methods.
Q5: It’s nice to see some data scaling results of RevColv2 in Table 1 (rebuttal file). However, Table 1 is more like a system-level comparison as different methods are using different pre-training dataset or teacher model. Conversely, I think a fairer evaluation would be a comparison of models on the same setting, e.g., in Table 4 (the original paper).
---
Reply to Comment 1.1.1:
Title: Further response
Comment: Dear Reviewer 6koM, thanks for your discussion. We make some further responses to your concerns.
**Q1/Q2**: The authors provided some inference speed comparisons. However, I remain curious about the unavailability of training speed comparisons. Given that RevColv2 apparently employs the same batch size as MAE (4096), I believe it would be feasible to measure the training speed and memory usage under the same settings. I think this comparison would contribute to understanding the trade-offs of RevColv2 when comparing with other methods.
**A1/2**: Thank you for your suggestion. We made some further analysis of the pre-training speed and memory cost, and compare them with the widely used ViT-MAE baseline. We take RevColV2-B and ViT-B for comparison. We test training speed and memory cost on a single machine with 8x A100 (80GB) GPUs, with the same dataloader (implemented for our cluster). We use our own codebase for RevColV2 and the official codebase for MAE. The table below shows the real training cost with batch size 4096 for one epoch. To speed up training and save memory, we equip RevColV2 with FlashAttention. We only use data parallelism in this test.
| | Time Cost | Memory (each GPU)|
| ----------------------------------- | ---------- | ------ |
| ViT-B | 220s/epoch | 43G |
| RevColV2-B | 249s/epoch | 49G |
| RevColV2-B + FlashAttn | 211s/epoch | 42G |
| RevColV2-B + FlashAttn + Reversible | 240s/epoch | 18G |
The table above shows that the vanilla implementation of RevColV2 pre-training is slightly slower (249s vs. 220s) than ViT. Equipped with FlashAttention, RevColV2 achieves a comparable pre-training cost (211s vs. 220s and 42G vs. 43G). We understand that ViT could also be equipped with FlashAttention to speed up pre-training. So, we further analyze the impact of reversible propagation. We test the pre-training cost of the reversible version of RevColV2 (recomputing the intermediate features during the backward pass from the last column's outputs, rather than using the vanilla autograd function in PyTorch). The results show that RevColV2-B can use very little GPU memory (only 18G) during pre-training with a total batch size of 4096. This allows RevColV2 to be pre-trained with limited resources, such as RTX 3090 GPUs.
We will add this analysis in the next revision, and we hope this response can ease your concerns. Please let us know if you have any questions.
**Q5**: Table 1 is more like a system-level comparison as different methods are using different pre-training dataset or teacher model. Conversely, I think a fairer evaluation would be a comparison of models on the same setting, e.g., in Table 4 (the original paper)
**A5**: In our investigation of data scaling on RevColV2, we hope to explore the upper bound of RevColV2 models' ability, so we mainly focus on a larger dataset and a strong teacher (this motivation is similar to the EVA series). Table 1 is indeed a system-level comparison and shows the ability of RevColV2 models. Behind these experiments, we ran some initial basic experiments at the beginning to verify this new scaling training schema: (1) the same dataset (ImageNet1K), teacher (CLIP-B), and settings (300 epochs) as MaskDistill [1] for RevColV2-B, validated on ImageNet1K fine-tuning; (2) the pre-trained models from Table 1 of the global rebuttal, but with exactly the same settings as EVA-02 / ViTDet on COCO detection (1024 x 1024 image size with LSJ augmentations; we use a 1536 image size in Table 1, the same as EVA-02). We show these results below.
| Model | pre-training | teacher | ImageNet1K ft |
| ------------- | ----------------------- | -------- | ------------- |
| MaskDistill-B | ImageNet1K - 300 epochs | CLIP-B | 85.0 |
| RevColV2-B | ImageNet1K - 300 epochs | CLIP-B | 85.5 |
| Model | image-size | AP |
| ---------- | ---------- | ---- |
| VIT-L | 1024x1024 | 57.6 |
| EVA-02 | 1024x1024 | 59.2 |
| RevColV2-L | 1024x1024 | 59.5 |
According to these results, RevColV2-B with the same dataset, teacher, and training settings achieves better performance on ImageNet1K fine-tuning compared with MaskDistill-B. We think this head-to-head comparison verifies the effectiveness of the data scaling with CLIP teacher schema for RevColV2. As for the results on COCO detection, we think they verify the ability of data-scaled pre-training. We hope this response can ease your concerns, and please let us know if you have any questions.
[1] Peng, Zhiliang, et al. "A unified view of masked image modeling." arXiv preprint arXiv:2210.10615 (2022). | null | null | null | null | null | null |
Multiplication-Free Transformer Training via Piecewise Affine Operations | Accept (poster) | Summary: This paper argues that multiplications are the main bottleneck in modern neural network training and inference, and proposes to reduce the cost by replacing them with a cheap piecewise affine approximation. This can eliminate all multiplications in the training and inference process as claimed.
Strengths: * I think this work has value. It is a new type of multiplication-free network, following AdderNets, ShiftNets, and ShiftAddNets. And it can be applied to both inference and training.
* The authors have implemented a customized CUDA kernel to support their claims. And the final results are similar to those of the original networks.
Weaknesses: * This paper claims to be the first work to reveal that a neural network has been trained entirely without standard multiplications. In my opinion, it overclaims as there are other multiplication-free networks with no multiplication involved during training or inference.
* The authors only compare accuracy but do not report latency or efficiency metrics. That looks weird, as the main motivation for adopting multiplication-free networks is to reduce training or inference costs.
I also wonder about the speed comparison between the customized CUDA kernel and the original networks. Which will be faster (even in the FPGA case)? In modern hardware, computation is no longer the bottleneck compared to data movement.
* How is the performance as compared to other multiplication-free/reduced networks in terms of accuracy and efficiency, e.g., addernet, deepshift, shiftaddnas, Ecoformer, and other binary neural networks?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Such a piece-wise linear approximation of multiplications can be seen as an approximation of a network. There are also other works that leverage piece-wise linear approximation to visualize and analyze the decision boundary or spline subdivisions. Are there any connections between these kinds of works? E.g., https://arxiv.org/abs/2302.12828, https://arxiv.org/abs/2101.02338
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper gives an analysis of the limitation on the GPU speedups.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time spent reviewing our submission as well as your feedback and suggestions! Below we try to address your concerns and questions.
**Other multiplication-free works** We apologize if this is an overclaim, that was definitely not our intention. We are aware of other works e.g. AdderNet that can be multiplication free at inference which we mention in the paper. For training specifically, we are not aware of other schemes that are 100% multiplication free. We would be grateful if you can provide a reference and will include it in the paper and tone our claims down accordingly. Regarding the specific references you mention later in your review, although they are impressive works and able to remove most multiplications, they do include some leftover multiplications during training. They seem to use standard optimizers with the associated multiplications and do not try to address the multiplication in e.g. normalization when used. AdderNet relies on Batch Normalization including the learnable affine transformation. Deepshift can eliminate the multiplications involving the weights but if we understand correctly the computation of the gradient for a weight requires standard matrix multiplication. We believe ShiftAddNAS inherits this from the other papers. EcoFormer seems to address the attention matrix multiplication specifically but leaves the others unchanged. We will add these as references when we discuss related work.
**Latency and efficiency metrics** This is something we would have liked to include, but the current lack of hardware support for these operations prevents us from giving a meaningful comparison. We currently simulate the arithmetic on GPUs using fp32 and int32 operations, which will always be slower than the standard baseline. Note that this is not a limitation of the method itself; accurately simulating e.g. bfloat16 multiplication on hardware that only supports fp32 multiplications would also result in a similar slowdown. With proper hardware support, we estimate a PAM operation to be around 5-10x cheaper than an fp16 multiplication in terms of energy and area (Appendix B). Other required hardware elements (that we do not focus on here), such as accumulation and memory, will reduce the overall gains.
**Comparison with other methods** While we can’t do an exhaustive comparison with all these methods we do compare to AdderAttention which is an extension of AdderNet for transformers (Section 3.2, Table 2). The AdderNet approach is very interesting and deviates from the traditional networks in a more drastic way. However, the PAM approach replaces more multiplications (all matrix multiplications), uses a cheaper replacement operation (integer additions instead of floating point) while resulting in a higher accuracy. We discuss some differences with the other works in the “other multiplication-free works” paragraph.
**Memory Characteristics** Thank you for bringing this up. You are correct that our method only focuses on the cost of multiplication itself but other aspects of computation such as memory accesses also contribute to the overall cost. In the global response we discuss additional experiments that suggest that PAM is compatible with lower precision floating point formats that would result in memory savings as well as further computational savings. The large area savings from PAM could also be used for other purposes including additional memory.
**Relation to other piece-wise linear works** This is a very interesting connection, thank you for pointing this out. Although their methods can already handle standard multiplications, it seems they are limited to piece-wise linear non-linearities such as leaky ReLUs. The piece-wise affine approximations like those we use for normalization layers and softmax might expand the applicability of their work to a broader range of architectures such as transformers, although we are not sure about the computational tractability in practice.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank the authors for the rebuttal. I will maintain my rating. | Summary: This paper proposes to replace all multiplications involved in a Transformer training process with bit additions of input floating-point representations. This is shown to be an approximation to the piecewise affine function that is again the approximation of common functions in Transformer training. Results show that this approximation will not cause obvious accuracy drop.
Strengths: The idea of multiplication-free training is very interesting, and, if true, is significant. The bit-addition as an approximation is concise on both algorithm and hardware side.
Weaknesses: My main concerns are detailed as follows:
1. There needs to be a better presentation in Section 2.2 on how piecewise affine multiplication (PAM) is reduced to bit-addition. At least the paragraph of line 106 is not obvious to the reviewer. For example, when EA=5, EB=5, SA=SB=MA=MB=0, the two floating-point numbers (now actually integers) A * B = 32 * 32 = 1024, but EA+EB+MA+MB+SA+SB = 10, which clearly doesn't equal and the results have a large gap.
2. In terms of performance (latency), is the bit-addition of floating-point representations (population count) better than multiplication? The Wallace tree implementation of multiplication has a time complexity of O(log(b)), seems like the same as the bit-addition.
3. Is the proposed method compatible with low-precision integer/floating-point formats? What would be the comparison between a quantized matmul and PAM?
4. The setup of results in Table 2 is confusing. It reports the training accuracy with only the matmul replaced with PAM, which is the same as the prior work Mogami (2020). What are the results with all layers replaced with PAM, especially with all optimizer ops replaced?
5. For Table 3 machine translation results, what are the model architecture details, e.g., number of parameters, layers, heads, etc.? What is the activation function used in the FFN (GELU or ReLU)? Is it replaced with PAM as well? It is also better to report loss values besides BLEU since BLEU is typically noisy.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Questions are listed together with the main concerns in the previous section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The reviewer is not aware of any social impact of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our paper! Below we try to address your concerns and feedback.
**(1) PAM as bit addition** We apologize that this can be misunderstood and propose rephrasing this paragraph as: We observe that the piecewise affine product of A and B is roughly achieved by adding the signs (Equation 6), exponents (Equation 7), and mantissas (Equation 8) of A and B. If we add the floating point representations of A and B given in Equation 3 together as integers, the extra term of 1{M_A + M_B > 1} in Equations 7 and 8 corresponds to an overflow of the resulting $\bar{M}$ into $\bar{E}$. Piecewise affine multiplication can therefore be performed by an int32 addition of the floating point representations, barring some technical details we discuss next.
We hope this clarifies that in your example the addition would be carried out as PAM([0, 5, 0], [0, 5, 0]) = [0+0, 5+5, 0+0] = [0, 10, 0] = 1024 which gives the exact result (this is generally the case when one operand is an exact power of two).
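The description above can also be made concrete in code. The following is an illustrative sketch (our editorial addition, not the paper's kernels); it ignores the exponent under-/overflow handling and zero inputs mentioned as technical details:

```python
import struct

def float_bits(x: float) -> int:
    # Reinterpret a float32 value as its raw int32 bit pattern.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_float(b: int) -> float:
    return struct.unpack('<f', struct.pack('<I', b & 0xFFFFFFFF))[0]

def pam_multiply(a: float, b: float) -> float:
    """Piecewise affine multiplication via one integer addition.

    Adding the bit patterns sums the sign, exponent and mantissa fields;
    a mantissa overflow carries into the exponent, realizing the
    1{M_A + M_B > 1} term. Subtracting 0x3F800000 (the exponent bias
    127 shifted into the exponent field) removes the doubled bias.
    """
    return bits_float(float_bits(a) + float_bits(b) - 0x3F800000)
```

Here `pam_multiply(32.0, 32.0)` returns exactly 1024.0, matching the reviewer's example, while `pam_multiply(3.0, 5.0)` returns 14.0 instead of 15.0, illustrating the approximation error when neither operand is a power of two.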
**(2) Latency** This is not our area of expertise so we apologize if we misunderstood your point or if there are errors in the following answer. If we understand correctly, a Wallace tree implementation concerns the implementation of an integer multiplication through a tree-like reduction of the partial products. The depth of the tree is O(log2(b)) where b is the bit width of the integers. Each addition in the tree operates on integer inputs with a width of at least b. The PAM operation (for floats) can be expressed as a single full width int addition (similar to one level of the Wallace tree) followed by a fix of the exponent (int8 addition) and handling for underflow / overflow of the exponent. In Appendix B we approximate the total cost as being around 2 full-width integer additions. If this holds, the latency could perhaps be modeled as around 2 levels of the Wallace tree resulting in a lower latency. Even if this turns out not to be the case, the resulting area savings would still be beneficial by freeing up area for other uses.
**(3) Lower precision floats** Yes, we believe the method should work with lower width floats such as bfloat16. We have added a global response to discuss additional experiments that support this. We believe for 16 bit formats PAM should be roughly 5-10x cheaper than standard multiplication.
**(4) Table 2** In Section 3.2 and Table 2 we only replace the matrix multiplication. This allows us to compare directly to AdderAttention, an alternative approach for replacing multiplications in transformer training, showing better performance while replacing more multiplications. In Sections 3.3 and 3.4 and Table 3 we extend this, studying the impact of replacing all operations individually and cumulatively. This is done on a computationally cheaper task, allowing us to run different combinations and obtain error bars.
**(5) Table 3** In the manuscript we describe the setup used in this experiment in Lines 204-213. The network is based on the original transformer architecture and has 6 encoder and 6 decoder layers with ReLU activations. Thank you for the suggestion of additionally reporting the loss. It seems like a good idea in general, but in this case we think it could potentially be misleading since we use a different loss function for some of the table entries (i.e. the piecewise affine approximation).
We will incorporate your feedback, clarifying the points and adding the results for lower bit widths. If you feel we have sufficiently addressed some of your concerns, we would greatly appreciate it if you would consider raising your review score.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for addressing the comments. | Summary: The paper introduces a novel approach that replaces all multiplications in Transformer training with a cost-effective piecewise affine approximation achieved by adding the bit representation of floating-point numbers. This method allows for a full multiplication-free training of Transformer models, covering linear and nonlinear operations in the forward, backward, and optimization phases. The authors demonstrate that this approximation leads to minimal accuracy degradation across different training scenarios and provide estimates of the potential savings in terms of area and power requirements.
Strengths: * The paper introduces a new "multiplication-free training" scheme that can be applicable to Transformer models and other deep learning architectures.
* Despite its simplicity, the approximation method shows only small gaps and minimal accuracy degradation compared to real values.
* The paper provides a unique way of reducing training costs.
Weaknesses: While the underlying motivation of replacing all the multiplications with simpler operations throughout the entire training procedure is appealing, the proposed method entails some concerns:
1. It looks like the general concept of piecewise affine approximation is not new and was introduced in Mogami. The authors stress (in L317-324) that the main difference from Mogami is the extension of the piecewise affine approximation to include forward, backward, and optimization, enabling a multiplication-free training procedure. However, the benefits of a multiplication-free network are not so clear for multiple reasons:
* (1-1) The benefit of applying the multiplication-free approximation to non-linear operations is unclear. Non-linear operations are generally memory-bound and constitute only a small portion of the overall inference runtime. Since the proposed method can only reduce the compute cost (not the memory cost), the gain from carrying out those computations without multiplications can be minimal.
* (1-2) The suggested benefit of allowing the model to be deployed on hardware without multiplication support (L324) is unclear. The author should provide some real examples, or otherwise, it appears hypothetical. Furthermore, even if we were to design new multiplication-free hardware from scratch, it remains uncertain whether the advantages (in terms of saving area, power, and latency) would be substantial enough to outweigh the noticeable performance degradation from matrix-free optimization and loss computation, as indicated in Table 3.
That being said, the author should provide a more thorough comparison between computing linear operations without multiplication (as in Mogami - baseline) and computing the entire training process without multiplication to differentiate their work from prior research.
Otherwise, it seems more favorable to run Mogami's partial PAM scheme on hybrid hardware with small multiplication units (just so that they can support a few non-heavy linear operations), which would offer a better trade-off between runtime costs and accuracy, as compared to the proposed methodology.
* (1-3) Furthermore, since the proposed methodology is targeting training (where the device must support multiple models) rather than application-specific inference, designing new hardware would require flexibility. The proposed solution relies on multiplication-free hardware, which would restrict the broader applicability of training various/new model architectures (e.g. with new nonlinearity).
2. The training cost comprises both compute operations and memory operations. Although the authors provide a reasonable estimate of how the proposed method reduces compute cost, it does not mitigate memory cost as all values are still stored in 32-bit precision.
For instance, in Figure 7 of [2], which serves as a reference for Table 4 in the paper, it is demonstrated that memory operations consume approximately two orders of magnitude more energy than arithmetic operations even when loading from SRAM.
Given that the memory operations can constitute a large portion of the overall runtime cost and the proposed method is not so effective at reducing the cost of memory load/store, the estimate of the overall energy saving provided in the paper remains unclear.
i.e., If a significant portion of the end-to-end energy consumption comes from memory operations, the savings achieved through computation reduction might be minimal.
3. Several methodologies have been proposed for efficient training using reduced-precision approaches (such as bfloat16, which has already become the norm [3], or integer-only training [4], even though the latter is not for Transformer training), and these should also be considered as baselines to compare against.
These methods not only decrease compute costs but also reduce memory costs.
In terms of end-to-end energy and latency reduction, would the proposed method offer greater benefits compared to these existing methods?
Considering the additional requirement for kernel/hardware design of the proposed method (versus reduced-precision training), the author should provide enough evidence that proves the overall gain to be considerably better than those methods to convey its practical value.
[1] Full Stack Optimization of Transformer Inference: a Survey, https://arxiv.org/pdf/2302.14017.pdf
[2] A Survey of Quantization Methods for Efficient Neural Network Inference, https://arxiv.org/pdf/2103.13630.pdf
[3] A Study of BFLOAT16 for Deep Learning Training, https://arxiv.org/pdf/1905.12322.pdf
[4] NITI: Training Integer Neural Networks Using Integer-only Arithmetic, https://arxiv.org/pdf/2009.13108.pdf
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do authors have any insights on why the exact backward functions yield more training instability and worse performance than the approximated ones?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see the weakness section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your time and the feedback you've given. Below we try to address the main concerns you list one by one.
**Fully multiplication free (1)** You are correct that from a hardware perspective focusing on the matrix multiplications (on both the forward and backwards passes) may be sufficient to reap most of the benefits for certain network and hardware architectures. In this work we show an approach like this can work across a variety of architectures including transformers (which has not been shown previously) before extending it to fully multiplication-free training. We also release code to enable further exploration of this area.
Being fully multiplication free may in some ways be of more academic interest for now (for current hardware and architectures). We still believe it is a valid question to ask given the complete reliance of current hardware and training schemes on multiplications. We show that this does not need to be the case and training entirely without multiplications is in fact viable. Future networks and training schemes could enhance its practical relevance. For example, architectures with a more limited connectivity which skew the ratio of matrix computation to other operations might be of interest on new hardware. The benefit of depth-wise separable and grouped convolutions over their dense variants could indicate that this is the case. The loss function is the only significant contributor to the modest performance decrease from the fully multiplication free scheme and affects a network with standard multiplications the same way. Deviating further away from the standard loss function might mitigate this, e.g. using base 2 logarithms and exponents which we can approximate much better using the piecewise linear framework (we discuss this as a future direction in L345). The fully piece-wise affine approximations might also be of interest in other non-hardware related areas such as the network analysis brought up by Reviewer WjDg.
It is also correct that training hardware might need to support a variety of possible models. Hardware such as FPGA could potentially perform fully multiplication free training while still supporting a variety of architectures. The fully multiplication free approach also extends to inference applications where specialized accelerators may be more feasible.
**Memory Costs and Reduced Bitwidths (2) and (3)** Thank you for pointing this out. We have added an experiment (see global response) that suggests that PAM can be used with narrower formats such as bfloat16 and even fewer mantissa bits. This should give memory access and bandwidth savings similar to other approaches as well as further reduce the computational costs. The area savings from the cheaper multiplication could also be used for other purposes such as memory.
**Exact vs approximate bwd** The approximate backward functions seem to better approximate the gradients of true multiplication in some ways. The exact derivatives give unbiased gradient estimates (of true multiplication) but can deviate more at a given location. The approximate derivative is potentially biased but typically closer to the true multiplication derivative. Figure 3 (Appendix A) plots the two types of derivatives for a visual comparison. Since PAM approximates multiplication over the long term, the gradients of true multiplication could better describe how the loss surface changes over longer distances. These gradients, and by extension the approximate gradient, could therefore serve as a “smoothed” or denoised gradient that aids optimization.
**General remark** In this manuscript, we have explored the question of whether neural networks can be trained in a fully multiplication-free fashion. We believe this is an interesting academic question in itself and could also hold practical relevance, whether by focusing solely on matrix multiplications (as demonstrated for various new architectures) or by fully eliminating all multiplications. The space savings achieved from the more economical multiplication replacements could be allocated to other purposes, including bandwidth and memory improvements. While we don't address every issue involved in the bigger picture, we believe our results present a viable path for future hardware improvements and are of interest to the community. We will include the results for the narrower floating-point formats in an appendix. If you feel that we have addressed some of your concerns, we would greatly appreciate it if you would consider slightly increasing your review score.
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarifications made in the rebuttal.
**Fully multiplication free**: I still find the multiplication-free scheme somewhat theoretical and academic for now, and adding concrete examples of hardware architectures would have strengthened the idea even further. Nevertheless, I agree with the author's claim about the potential advantages and future prospects of the suggested scheme.
**Memory Costs and Reduced Bitwidths:** We appreciate the authors for the added experiments/results, which will strengthen the submission.
Overall, some of my concerns about the paper have been addressed, so I have raised my score to 5. | Summary: This paper presents a method for training deep networks completely without multiplication, via approximating multiplication using piecewise affine operations. The authors show that their method can be used to train modern deep networks, including Transformers.
Strengths: The paper is **extremely** interesting. The ideas are great, and the findings are quite surprising. The results are preliminary but seem quite promising, and the idea is worth exploring in more depth. A future where all neural networks are trained without multiplication sounds very interesting and exciting!
Weaknesses: Modern deep networks are supported by extensive hardware support, such as tensor cores for matrix multiplication. This means that the algorithms proposed in this paper, although theoretically more efficient, are not more efficient in practice. The paper would be stronger if it were more upfront with these limitations, and measured wallclock of the algorithms on modern hardware. The paper would be stronger with this comparison upfront, especially since "Hardware-Efficient" is in the title (an alternate framing, such as "Multiplication-Free Transformer Training via Piecewise Affine Operations" would not suffer from this same weakness).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the wallclock runtime of this method? How would it change with the equivalent of tensor core support for this approximation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limited discussion of wallclock characteristics on modern hardware.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time spent reviewing our submission and as well as your feedback and suggestions!
We agree that it would likely have been better to de-emphasize the hardware efficiency and focus more on the multiplication-free aspect. We are unfortunately unable to change the title here, but will give this serious consideration if the paper is not accepted.
We try to discuss the theoretical hardware benefits and current runtime in Appendices B, D and E. Due to the lack of hardware support for PAM operations on GPUs we simulate the PAM arithmetic using multiple INT32 and FP32 instructions for each multiplication. This results in runtimes that are several times slower than an FP32 baseline on the GPUs used. We still think this is a relatively good runtime for a detailed simulation of arithmetic that is not supported in the hardware and spent considerable time on the kernels to enable this. With proper hardware support and a similar tensor core implementation we estimate that the multiplication itself would be on the order of 5-10x cheaper in terms of hardware area and energy cost compared to FP16 (Appendix B). This would allow packing more processors etc on a given chip which should result in increased speeds / wallclock runtime. Other hardware elements required (that we do not focus on here) such as the accumulation and the memory will reduce this number but we believe these can be addressed using orthogonal methods such as narrower floating point formats (global response) and the accumulation approaches discussed in Appendix B.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I recommend moving part of this submission to the main paper for the next version. | Rebuttal 1:
Rebuttal: # Global Response
We are grateful to the reviewers for their efforts, insightful comments and constructive suggestions. We respond to all reviews individually. In this response we discuss the compatibility of PAM with lower precision formats, a question raised by several reviewers. In the manuscript we mention that we believe PAM should work with narrower mantissas but did not perform experiments to validate this. Narrower mantissas would yield further computational savings as well as memory savings (including storage, read costs and bandwidth).
To address this concern we have performed an additional experiment where we simulate training with piecewise affine matrix multiplications (approximate bwd) using numerical formats with fewer mantissa bits than float32. We achieve this by rounding the inputs and masking the extra mantissa bits but do not change any other aspects of the training setup (i.e. no tuning or special low precision tricks). The internal accumulation is left unchanged similar to standard fp16-fp32 mixed precision. We focus on the training for two tasks, the IWSLT transformer from Section 3.2 and the VGG-13 CIFAR-10 network from Appendix C. The results can be seen in the table below (average±std for three runs).
For both networks, we observe minimal to no discrepancy when training with 7 mantissa bits (equivalent to bfloat16). Mantissas as narrow as 4 bits work well, providing a comfortable margin for bfloat16 and offering promise for extensions to very narrow formats such as 8-bit floats (although these might necessitate additional techniques to accommodate the narrower exponent). However, a 3-bit mantissa noticeably impairs the transformer's performance and may marginally impact VGG training.
| Matmul Type | VGG-13 Test Accuracy | IWSLT14 BLEU Score |
| ----------- | ----------- | ----------- |
| float32 | 92.9±0.3% | 34.4±0.1 |
| PAM float32 | 92.9±0.2% | 34.2±0.2 |
| PAM bfloat16 | 92.9±0.2% | 34.4±0.2 |
| PAM 4 bit mantissa | 92.9±0.5% | 34.2±0.1 |
| PAM 3 bit mantissa | 92.8±0.2% | 29.4±0.5 |
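The mantissa masking described above can be illustrated with a small helper. This is a simplified sketch (our editorial addition) that truncates the extra mantissa bits; the actual simulation also rounds the inputs:

```python
import struct

def mask_mantissa(x: float, keep_bits: int) -> float:
    """Zero out the low mantissa bits of a float32 value to simulate a
    narrower format (float32 has 23 explicit mantissa bits; keep_bits=7
    emulates a bfloat16 mantissa). Truncates instead of rounding."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= ~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF
    return struct.unpack('<f', struct.pack('<I', bits))[0]
```

For example, `mask_mantissa(1.96875, 4)` drops the fifth mantissa bit and returns 1.9375, while any value already representable in the narrower format passes through unchanged.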
We will add these results to a new appendix section. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Evaluating Cognitive Maps and Planning in Large Language Models with CogEval | Accept (poster) | Summary: This paper describes a test battery to test for the emergence of cognitive maps and planning abilities in LLMs. The tests are based on existing cogsci experiments converted into text prompts. For example, to test for a planning ability, the prompt first describes an apartment layout, and then asks the model to plan a path through several rooms to retrieve something. The authors show that LLMs are poor at completing such tasks.
Strengths: I think the adaptation of cogsci planning experiments to text prompts is a great idea. There need to be more tests like this to map the scope of LLM capabilities.
The authors evaluate multiple prompts from the same generative model and compute statistics of LLM responses. This approach stands in contrast to many arguments for emergent intelligence in LLMs, which derive from anecdotal examples. It is important to do actual experiments with LLMs.
Weaknesses: This is a presentation issue, but it affects my ability to understand and evaluate the paper. The writing is unedited. There are a lot of unnecessarily long sentences that can be condensed for pragmatics and readability. The grammar and punctuation are weird, like this might be an initial draft. I had to read the same content multiple times, and missed important details throughout the paper.
Figures are odd, seem to be just thrown in together from unrelated bits. For example Figure 1 shows examples of graphs A-E. Are these all the graphs used in Experiment 1? If so, this is not stated. Weirdly, each graph is shown in a different pictorial style. I'm sorry, but this looks like someone just downloaded different types of graphs from google search, panel E has a random red outline around it.
Experiment design is not given. I cannot check the statistics without the experiment design. What are these degrees of freedom? It looks like multiple regression models were built? I think the intention is a good one, but it is not clear what was done.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. "We evaluated the robustness of our findings by regenerating the results for each combination of factors and parameters multiple times and applying a statistical model of how each of these factors contribute to variance in LLM performance." What does this mean? Where is the exact experimental design? How many trials were simulated? Which conditions were evaluated?
2. Where is the experiment design for Experiment 1? Where are the graphs?
3. "198 For the graph community block model, example graphs are shown in Figure 2 ..." What does this mean? Does Figure 2 show all graphs, or a subset of them? If this is a subset, then how many graphs of each kind were shown? Where is the "community block model" described?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: I cannot make out what was done exactly, but am happy to read the next draft. The experiments and statistical analysis need to be described so that people could reproduce them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses
1. Thank you for this thoughtful suggestion. We have edited longer sentences and improved grammar, punctuation, as well as the figures.
2. Yes, all graphs are evaluated in Experiment 1, each with 3 instantiations with different spatial and non-spatial domains (18 total environments), 30 measurements of 15 tasks for OpenAI models and 3 for the remaining 5: a total of 8910 measurements. Domains include spatial ordered rooms (room 1), general spatial (flower field), and social framings (e.g., Bob is friends with Alice) to explain the structure. The reviewer is correct that we copied the figures of the larger graphs from one of the authors' previous papers. Thanks to the reviewer's excellent suggestions, we've improved Figure 1 with better graphics.
3. Experiment design: We thank the reviewer for this comment. We describe the experimental design in various portions of the paper but agree the reader would benefit from a concise summary. We elaborate on the design and the statistical approach taken below, and the reviewer can find all the prompts that were used in the prompt viewer link (https://cogeval.github.io/cogmaps/). We are happy to provide further information.
Our experiment is designed to assess zero-shot LLM behavior on planning tasks. The experimental factors and levels are as follows:
1. Large Language Model (LLM):
• OpenAI: GPT-4, GPT-3.5-turbo-175B, davinci-003-175B. Measurements per Factor Combination (MFC): 30
• Google: Bard. MFC: 3
• HuggingFace: BigScience Bloom-176B. MFC: 3
• Cohere: Cohere-52.4B. MFC: 3
• Anthropic: Claude-1-52B. MFC: 3
• Others: Pythia-20B, LLaMA-13B, Alpaca-7B. MFC: 3
2. Graph structure of the environment: A, B, C, D, E, F
3. Item Domain: spatial (numbered rooms), spatial (spaces), social ties
4. Conditions:
• Value-based planning & traversal (what’s the optimal path?)
• Reward revaluation
• Transition revaluation
• Shortcut (with & without teleportation)
• Detour (with & without teleportation)
5. Temperature: 0, 0.5, 1.0
It’s essential to note that this is an unbalanced design since the number of measurements differs across LLMs (30 for each of the OpenAI models and 3 per non-openAI LLMs).
Statistical Analysis: We used logistic regression to model the probability of success as a function of experimental factors. The outcome variable is a binary measure of whether the LLM successfully answered the question or not, given a combination of factors.
Given the repeated measures, logistic regression effectively accounts for the effect of each factor and their interactions on the probability of success. Note that only one logistic regression was run, across all factors, to ensure a comprehensive and unbiased evaluation.
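As a sketch of the kind of analysis described above, consider a minimal logistic regression fit by gradient ascent. The LLM names, success counts, and single-factor design here are illustrative stand-ins, not the paper's data or full factor set:

```python
import math

# Illustrative data: one tuple per measurement, outcome 1 = task solved.
# The names and success counts below are made up for this sketch.
data = ([("gpt4", 1)] * 24 + [("gpt4", 0)] * 6 +
        [("baseline_llm", 1)] * 1 + [("baseline_llm", 0)] * 2)

def fit_logistic(data, steps=20000, lr=0.05):
    """Fit P(success) = sigmoid(b0 + b1 * is_gpt4) by gradient ascent
    on the log-likelihood (a stand-in for a full statsmodels/R fit)."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for llm, y in data:
            x = 1.0 if llm == "gpt4" else 0.0
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient of log-likelihood w.r.t. b0
            g1 += (y - p) * x    # gradient w.r.t. b1
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

b0, b1 = fit_logistic(data)
# exp(b1) is the odds ratio: how much higher the odds of success are
# for "gpt4" relative to the baseline LLM in this toy data.
odds_ratio = math.exp(b1)
```

In the actual analysis, all factors (model engine, graph, domain, task, temperature) and their interactions would enter the same single regression, and "odds of success are X times higher" statements correspond to exponentiated coefficients of this kind.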
We hope this clarifies the experimental design and the statistical approach. Please see PDF for new table.
Questions
1. We hope our responses above address this concern & are happy to elaborate
2. Experimental design: All experiments were designed to test planning ability using different tasks in environments with different underlying structures and domains. All designs begin by explaining an environment (e.g., the rooms and connections in a castle, the social ties among people represented with names) as a Markov Decision Process (MDP) or graph, with rewards. A task then follows:
• “Traversal”: prompting the LLM to list the path from a given node to another.
• “Value path”: asking for the optimal path to the highest rewards.
• “Reward revaluation”: after responding to value path, the LLM is presented with a second prompt, in which a small local change in the magnitude of rewards is announced and the optimal path is probed again, to test robustness to changes in rewards.
• “Transition revaluation”: after responding to value path, the LLM is presented with a second prompt with a small local change in the structure of the edges and the optimal-path question again.
• “Detour”: after responding to value path, the LLM is presented with a second prompt, in which a specific previous path is blocked and the LLM needs to reroute and find a detour using the structure of the environment.
• “Shortcut”: after responding to value path, the LLM is presented with a change in the structure that should reveal a shortcut, and is prompted for the optimal reward path again to test whether it can identify it.
3. “Where is the "community block model" described?” The block model is described in the submitted Figure 2; it is most easily seen in the rightmost panel. Each block (or community block) contains 5 vertices. Each color represents its own “community” (there are three of them), where nodes have a specified likelihood of being connected to other nodes in the same block (probability of intra-connection). In the example on the right, the likelihood of connection within a block is set to 100%, creating a fully connected clique (all vertices within the block are connected to one another).
This has now additionally been further clarified in the supplementary material as follows: “To systematically evaluate GPT-4's planning or graph traversal failure modes, we created a three-block community graph structure where each block contains five vertices. Using this approach, we vary the connection density within each community block and ask GPT-4 to perform reasoning tasks over each permutation of the graph structure as block density is varied.
Example graphs are shown in Figure 2 with the community graphs starting as simple line graphs on the left - representing the sparsest level of connectivity. We then create a new edge within each block for each iteration of the experiment until each community block forms a clique structure as seen on the right of Figure 2”. For the experiment, we simply vary the connection probability within a block and observe how the LLM performance varies as we move from a largely disconnected block (Figure 2, left) to a fully connected block (Figure 2, right).
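As an illustration of the graph-generation procedure described above, here is a minimal sketch. The function name and the single-bridge-edge scheme for keeping the graph connected are our own assumptions, not the authors' exact generator:

```python
import itertools
import random

def community_block_graph(n_blocks=3, block_size=5, p_intra=1.0, seed=0):
    """Build the edge set of a simple community graph: node pairs within a
    block are connected with probability p_intra, and consecutive blocks
    are joined by one bridge edge so the graph stays connected.
    (An illustrative reconstruction, not the authors' exact generator.)"""
    rng = random.Random(seed)
    edges = set()
    for b in range(n_blocks):
        nodes = range(b * block_size, (b + 1) * block_size)
        for u, v in itertools.combinations(nodes, 2):
            if rng.random() < p_intra:
                edges.add((u, v))
    # one bridge edge between consecutive blocks
    for b in range(n_blocks - 1):
        edges.add((b * block_size + block_size - 1, (b + 1) * block_size))
    return edges

# p_intra = 1.0 makes every block a clique (Figure 2, right);
# p_intra near 0 leaves blocks nearly disconnected (Figure 2, left).
clique_edges = community_block_graph(p_intra=1.0)
```

Sweeping `p_intra` from low to high reproduces the density manipulation in the experiment: the blocks go from sparse line-like structures to fully connected cliques.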
---
Rebuttal Comment 1.1:
Title: Thank you for your response!
Comment: I have increased my rating to a 5. | Summary: This paper evaluates LLMs on a set of tasks that could be solved by cognitive maps, such as goal-oriented planning, or incorporating shortcuts. The work finds that LLMs generally perform poorly at these tasks, and their performance is affected by features such as graph sparsity.
Strengths: * The paper is admirably thorough with experiments and analyses:
- Assessing a range of LLMs (including varying parameters such as temperature).
- Evaluating across a range of different graph structures, task paradigms, etc.
- Creating new stimuli to avoid dataset contamination.
- Performing regression analyses to determine how different features contribute to model performance, and describing the failure modes observed.
- These thorough results and analyses suggest that the conclusions are likely to generalize to some extent, and help the reader to understand which features will affect performance.
* The results are interesting, there are a variety of patterns that could be investigated further.
Weaknesses: My primary concern is that the overall framing of the paper is misleading. In particular, the work is motivated with references to the cognitive and neuroscience literature on cognitive maps. This work draws strong conclusions from its experiments, such as "no evidence for understanding cognitive maps or planning." Are conclusions such as these justified?
A key issue in analyzing AI in comparison to human or animal capabilities is determining where a performance failure originates: is it a lack of an underlying capability (such as the ability to form a cognitive map), or a more superficial performance issue? Several recent papers have emphasized this point from a cognitive science perspective, and argued that it is essential to ensure fair comparisons between AI and natural intelligence to draw accurate conclusions (https://www.pnas.org/doi/abs/10.1073/pnas.1905334117; for LLMs specifically see: https://arxiv.org/abs/2210.15303).
In that context, it's worth noting that the animal and human experiments cited involved a great deal more experience before the map was tested than the present experiments do. For example, Tolman's latent learning experiments involved the rats fully exploring the maze for multiple days before they were tested with a food reward at the end (and even then, performance continued to improve well after the rewards were first introduced). Or Schapiro's temporal community structure paper involved half an hour of exposure to transitions from the graph; that is, thousands of transitions from a graph with only 15 nodes. This is a much denser sampling of experience than the current LLM experiments afford, and it is quite possible that some degree of repeated experience contributes to the ability of natural intelligences to form a cognitive map. The difference in learning conditions is briefly mentioned in the limitations, but is quickly dismissed; however, the difference in experimental conditions is a fundamental challenge to concluding that LLMs are failing to form cognitive maps like animals/humans do.
Likewise, it is typical in cognitive science to report comparisons to chance-level performance, and an underlying ability is usually inferred from better-than-chance performance, even if that performance is imperfect. For example, in Tolman & colleagues' studies, the rats continued to improve for several days after the reward was introduced (that is, their paths were not optimal on the first test), and the rats were clearly stitching together trajectories that they had observed (since there was only a single route through the maze, everything else led to dead ends), but we still interpret their performance as showing latent learning. It would be useful to report chance-level performance across all conditions, and, in the case that a model performs better than chance across a broad range of conditions, that would suggest some underlying ability, even if it is imperfect. For example, GPT-4's performance seems reasonably high across most conditions in Table 2 (though certainly imperfect).
In addition, explicit comparisons to human performance on these tasks (presented exactly as they are presented to the language model), would strengthen the claim that language models are failing in a fundamental way that humans or animals would not.
These points seem critical to the overall framing of the paper, and also to much of the discussion. I therefore think the paper would be substantially improved by:
1) providing a more nuanced framing of the very interesting results, that suggests they emphasize some limitations of planning in LLMs, without making overly strong claims such as "no emergent planning" or "no evidence of cognitive maps"
2) Providing explicit comparisons to chance-level (and ideally human) performance to help contextualize the results.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: * In the discussion, what explicit experiment is referred to by "We observe that LLMs do better in problems where the entire trajectories are explicitly available in the text prompts, and they only need to piece together partial changes"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: See weaknesses; I believe that the limitations of the experimental paradigm not matching the inspiration are not fully discussed, and that more generally the conclusions are not fully supported by the experiments presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough engagement with our work and thoughtful notes on the strengths and constructive suggestions.
Weaknesses:
1- Framing. We agree that nuanced language is more productive. Our goal was to test zero-shot planning behavior in LLMs. If accepted, the camera-ready title will be “Evaluating Cognitive Maps and Planning in Large Language Models with CogEval”, removing "no emergent planning". We have also introduced nuance in various sections. We referred to “no emergent planning” given that LLM failure modes suggest they don’t “understand” how to use a cognitive map for planning: e.g., they hallucinate edges that don't exist, fall into loops, or fail to use one-step paths (Supplementary Figure 3, now in the main text). We did not intend to create a benchmark to compare against human behavior or chance, but to compare planning across LLMs. We evaluate robustness in the face of changes against the baseline of simpler tasks and identify major failure modes.
2- Comparison to rodent/human experiments: We are thrilled that the reviewer has a deep understanding of cognitive maps and rodent experimental work.
We'd like to note that our goal here was to test whether LLMs have *zero-shot* planning behavior. While we agree with making the wording more nuanced, please note that our goal was not a comparison to human behavior.
Moreover, comparison to rodent learning would not be fair given that rodents can learn while LLMs with frozen weights cannot, and zero-shot planning with language is inherently not comparable to rodent learning.
Meanwhile, in ongoing studies we are using non-zero-shot approaches to LLMs, in context learning (ICL) & scratchpad/CoT, to improve planning in LLMs. That said, we believe that these studies are not in the scope of the present paper. We acknowledge that investigating all these tasks in human behavior would be a fantastic but separate future contribution. We hope the reviewer appreciates our response.
3- Comparison to chance:
We thank the reviewer for this suggestion. We agree that providing a chance baseline for LLM performance would be an interesting contribution. However, we believe doing so would be non-trivial given our specific design.
3.1. defining chance is non-trivial: The examples the reviewer provided rely on binary & multiple-choice responses, where defining chance at 50% makes sense. However, it is less clear how to define chance level when we have asked for a trajectory to optimal reward. If we had used multiple-choice responses, we could quantify random chance as 1/k where k was the number of choices. We pose open questions to the LLMs and evaluate their responses. It is not clear how to enumerate a sample space of possible answers.
That said, we believe that comparison of the same LLM across graphs and domains, as well as the comparison of different LLMs' performances on the same tasks/graphs/domains, offer ample novel contributions to the field and satisfy cognitive- and neuro-science standards (some authors have worked in cog sci for decades). Nevertheless, we have considered a number of possibilities below in hopes of aligning the reviewer’s thoughts with our thinking process on the non-triviality of a chance level.
Consider traversal from a given node to a destination node. For a given graph, one possibility is to calculate all pairs of “shortest paths”, but this reduces to the algorithm used for betweenness centrality (which takes O(n² log n) to compute), and it would produce an exponential number of possible paths. This means that the likelihood of randomly choosing the correct path will almost always be near zero, which makes the use of random chance less meaningful.
3.2. We could also use a random walk algorithm with the following constraints:
• Randomly walk through the graph and report a success if the goal node is reached, failure otherwise
• No backtracking & no revisiting nodes. If a previously visited node is encountered again in the random walk, terminate walk as a failure.
• Apply uniform transition probabilities based on what transitions are permitted in each condition.
• Run repeated random walks, count the proportion of successes as a Monte Carlo estimator of the probability of success by random chance.
To illustrate, for the value-based planning and transition revaluation conditions on graph A, a random walker that didn’t backtrack would have a *50% chance* of achieving the goal (the room with the most reward). However, once we introduce the shortcut/detour/teleportation conditions, the state graph becomes complicated. The simulation would become more involved for more complex graphs, but this random-walk Monte Carlo estimate of the probability of success by random chance may be a sound baseline for evaluating LLM performance.
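The constrained random-walk baseline described in the bullets above could be implemented roughly as follows. The Y-shaped example graph here is a toy illustration (not one of the paper's graphs A–F); on it, the walker succeeds exactly when it picks the correct arm at the fork, giving a 50% chance:

```python
import random

def random_walk_success_rate(adj, start, goal, n_trials=10000, seed=0):
    """Monte Carlo estimate of the chance that a uniform random walker
    which never revisits a node reaches `goal` from `start`.
    A walk that hits a dead end (no unvisited neighbors) is a failure."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        node, visited = start, {start}
        while True:
            if node == goal:
                successes += 1
                break
            options = [n for n in adj[node] if n not in visited]
            if not options:
                break  # dead end: terminate the walk as a failure
            node = rng.choice(options)  # uniform over permitted transitions
            visited.add(node)
    return successes / n_trials

# Toy Y-shaped room graph: from room 1 the walker picks one of two arms;
# only the arm through room 2 leads to the goal room 4.
adj = {1: [2, 3], 2: [1, 4], 3: [1, 5], 4: [2], 5: [3]}
rate = random_walk_success_rate(adj, start=1, goal=4)  # ~0.5
```

The same estimator applies unchanged to the more complex shortcut/detour/teleportation state graphs: only `adj` (the permitted transitions per condition) changes.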
Questions: We appreciate the opportunity to expand on this.
In smaller graphs, the prompt already expands all the possible paths or trajectories. When there’s a change in the rewards or transition structure, the LLM only needs to change one thing in an already laid out path. However, in more clustered graphs only the one-step connections are laid out in the prompt, but not all paths or trajectories between any given two nodes. This means that the LLM needs to use the transition structure to unroll the trajectories and find the correct path, which is closer to the notion of planning in model-based RL and in cognitive science.
An observation that speaks to this is that performance on larger graphs is far worse than the smaller ones. It is not just the graph size, LLMs often perform worse on the graph with 15 nodes and 3 dense clusters compared to the 16-node (4-cluster) graph that has more nodes, but better cross-cluster connectivity. The difference: there are fewer paths among clusters in the Schapiro graph, making “planning” more relevant here. The difference is robust to prompt variation. Taken together, these findings support the claim and we are happy to discuss further.
---
Rebuttal Comment 1.1:
Title: Thanks for the improvements, and some follow up thoughts
Comment: Thanks to the authors for their thoughtful response. I've updated my score accordingly. Some follow-up thoughts below:
1. I believe this reframing will improve the paper.
2. In the context of the above reframing, this issue will likely be improved. However, I have some lingering concerns about the comparison, that depend on precisely how the paper is reframed. For example, if the authors still reference rodent + human studies in motivating the cognitive maps (which I think is a good thing!), then it would still be useful to highlight the distinctions between the experimental methods in that work (e.g. substantial experience, learning as the authors point out) and the zero-shot methods in this work. I understand that the goal is not to make direct comparisons to human behavior, but I think it will be hard to write a paper talking about cognitive maps and citing the prior literature without (implicitly, at least) drawing that comparison. Thus, I hope the authors will highlight these discrepancies in the paper.
3. I agree that chance level performance can be tricky to determine. Nevertheless, I think that including some such comparisons would help to situate the results in context.
- From my perspective, any chance metric that takes account of the graph structure (such as random walks on the graph) is not really the most appropriate chance-level comparison for these experiments, because it effectively presumes some kind of cognitive map (that is, if the model sampled from these distributions, it would thereby be fully respecting the constraints imposed by the graph structure). In a fully no-cognitive-map baseline I would expect chance to be sampling paths truly at random from the set of possible nodes, without respect to any spatial constraints (perhaps sampling without replacement, so that the set of paths is finite).
- Alternatively, the authors might suggest that recognizing pairwise constraints would be possible just from the experienced paths, and so an additional chance level baseline would be sampling from transitions observed in the prompt.
- I think comparing the models' ability to respect local dependencies to the above two chance level baselines would help to elucidate the extent to which the issues the model is facing stem from lack of *any* cognitive map (i.e. not respecting local dependency constraints, or only respecting observed ones) vs. inability to plan over longer distances. I think that such a comparison would help to quantitatively clarify some of the qualitative claims in the current discussion.
- I do think it would be reasonable to also compare to a random walk over the graph with no repeats baseline; that would be more of a "cognitive-map-but-no-planning" comparison, which would also be useful (but I'd see the above as more valuable).
Questions. Thanks, this clarifies things.
---
Reply to Comment 1.1.1:
Title: Thank you and following up
Comment: 1- We appreciate it.
2- We agree with the reviewer that the difference between human experiments and our prompts for LLMs is important. We already noted this in the submitted manuscript under limitations, lines 298-304:
“in the human experiments that influenced our prompts, participants learn gradually, experiencing states one-by-one, but were only tested after they showed signs of learning, similar to a model-based RL agent having the transition structure and using it for inference and planning. To address this difference, we present the environment’s structure in linguistic format. In all cases, the participant or model had to identify the goal location based on instructions and infer the policy towards the goal, which is the room with the maximum reward.”
Second, on lines 318-319 we explicitly said that we use a functionalist notion of cognitive maps and planning and not a “human-like” notion:
“we evaluated emergent cognitive capacities in LLMs in a functionalist and multiple-realizability sense rather than requiring any assumptions of them being "human-like" [31]”
Even Tolman’s original 1948 paper imbues cognitive maps with far more abstract & general intentions. Behrens et al. 2018's “What is a cognitive map?” defines cognitive maps as relational structures of knowledge, & Epstein et al. 2017 as “a unified representation of the environment to support memory and guide future action”. Consistently, we functionally test whether LLM behavior in planning tasks is consistent with having a unified representation of the environment that can be accurately recalled for planning.
We described the structure of the env & asked questions to test whether LLMs can *extract a unified representation (cognitive map) and accurately recall it for flexible planning* & found that while some LLMs can list state-state transitions, when it comes to using this knowledge for planning they *hallucinate edges that don’t exist* (GPT-4: 25.57% responses on Schapiro graph were wrong due to hallucinations), *fall into loops*, or say irrelevant things. In a functionalist spirit we a) report this performance & classify failure modes, b) suggest that these failures are inconsistent with LLMs accurately using a unified representation of the environment (cognitive map) for planning. We agree with the reviewer’s broad comments & we've touched on them in the MS but can further clarify.
3- We appreciate the reviewer noting sampling from observed transitions in the prompt to check if they’re from the set. As shown in Supplementary Figure 2, now in the main MS, we visualize 3 failure modes of GPT-4. Among them, hallucinating edges that don’t exist speaks to this. Namely, for the Schapiro graph, 25.57% of GPT-4 responses were incorrect because of hallucination, despite GPT-4 being able to list the tuples when asked directly, as opposed to during planning tasks. This speaks to a failure of “a unified representation of the environment for memory & planning”, or cognitive map, revealing planning failure due to inaccurate recall of transition structures. The reviewer’s point, “not respecting local dependency constraints”, is thus relevant & helpful, and inspired us to compute the %hallucination. If the reviewer agrees, we will add this to the paper.
We kindly note that many studies in the cognitive map literature do not use a chance level but a task condition as baseline (Garvert et al 2017; Momennejad et al 2017). Even Tolman & many rodent experiments don’t focus on chance levels but a condition as baseline (e.g., latent learning contrasts having vs. not having explored the environment before rewards are introduced). Similar to Tolman’s latent learning, a given LLM or condition can serve as a baseline in statistical analyses to Compare:
A) tasks, with one (traversal) as baseline
B) LLMs (odds of success are 6.1464X higher for GPT-4 > alpaca-7b)
C) robustness to domain (baseline: spatial)
D) robustness to local changes (reward reval, detour, etc) & temperatures
Such analyses revealed how different factors (model engine, graph type, domain, task) & their interactions affect the odds of success for LLMs on each task.
We agree with the reviewer that random walks do not offer a satisfactory chance level. We believe that given the open multistep responses our design is not comparable to classification or experiments that require chance. In our opinion, artificially imposing nontrivial chance metrics might unfairly bias the interpretation to make it seem like LLMs do better. Moreover, LLMs are explicitly trained on next word prediction so randomly sampling any tuples does not seem like an appropriate baseline. Thus, we respectfully do not believe that random sampling is a fair comparison.
We hope our responses, rooted in over a decade of experience with cognitive maps, have addressed the reviewer’s concerns & are happy to clarify further. We hope our difference in optimism about LLM abilities does not impede the reviewer from raising the score! | Summary: The paper presents CogEval, a set of best practices ported from cognitive science on how to do behavioral evaluations. The authors also transcribe new tasks from human reinforcement learning and planning into text, such that LLMs can be tested on them. On these tasks, the authors do not observe evidence for an emergent capability for planning in large language models.
Strengths: - The presentation of CogEval is clear and potentially useful for the ML community. I’m excited to use CogEval in my own work.
- Thorough and thoughtful discussion.
- Exhaustive experimental setup—a feature perhaps enabled by the principled CogEval framework!
- The paper is lucid overall and easy to read.
Weaknesses: There’s some mild overselling in the paper, given the empirical results, which only demonstrate the failure of an existence proof. We have not rigorously defined the conditions under which we can declare definitively that there is no emergent planning; given that, “No Emergent Planning” in the title seems too strong.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Are you planning to release these tasks transcribed to text as a new testbed?
- Does transcribing these tasks to text make them harder? Perhaps a vision-and-language model like GPT-4 or Flamingo might perform better on pictures of the task, in which case, perhaps your conclusions would be different.
- Why is “Measurement and Evaluation for Large Language Models” capitalized in line 6?
Nits:
- Line 14: Stay in active voice here for consistency?
- Figure 3: Can you make this less blurry?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful that the reviewer finds CogEval clear and thorough and are delighted to read they may potentially try it or use it in their work. We hope that we have addressed their helpful and constructive suggestions below and are more than happy to address any further feedback.
Weaknesses:
• We appreciate the reviewer’s suggestion and find it a fair assessment. In the camera-ready version, we plan to remove “no emergent planning” from the title. We have already begun editing the text to make the conclusions more nuanced in various sections. Our interim new title is “Evaluating Cognitive Maps and Planning in Large Language Models with CogEval”.
Questions:
• All the prompts are already available in the anonymous link we provided for the reviewers.
• This is an excellent question, and we hope very much to test this in the future. Given our multiple tests and checks we know that GPT-4 can extract the correct tuples from the instructions, both spatial and social. However, GPT-4 fails at using these same tuples during planning, with three main failure modes; please see Supplementary Figure 2, also included in the rebuttal PDF. If accepted, this figure will be moved to the main manuscript. These failure modes include 1) hallucinating edges that are non-existent, 2) taking unnecessary moves that lengthen the path, even when there’s a 1-step transition available that GPT-4 has captured from the description, and 3) falling into loops. The question of the influence of a visual LFM on each and all of these failure modes is surely of interest. Would visual reasoning improve all of them, some, or make them worse? We think pursuing the reviewer's suggestions could be an excellent follow-up study.
• The reviewer is correct, this is inherited from an older abbreviation prior to CogEval. We have fixed it per the reviewer’s suggestion.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for acting on my few suggestions. The authors more accurately represent their results. | Summary: This paper proposes an evaluation of large language models with respect to their ability to solve problems that require use of latent cognitive maps. Evaluation focuses on different underlying graph structures, and the influence of chain-of-thought inference on performance.
Strengths: * I really liked the very clear outline of the motivation of the research design in the introduction of Section 2.
* The experiments are extremely thorough, and the goal of designing experiments with statistical robustness in mind is good.
* Good discussion around the capabilities and limitations of LLMs
Weaknesses: Some of the presentation could be refined:
* The description of Figure 1 is somewhat difficult to understand. References to future aspects of the paper (e.g., "Experiment 3") are undefined, which makes it more difficult to understand.
* The figures / tables should appear closer to where they are referenced in the text.
* I'd suggest reordering 2.1, so that the tasks (i.e., maze learning) are described before the experimental setup.
* Some details could use more context. e.g., what is temperature? What are the graph structures A/B/C... etc?
* The discussion on BFS/DFS prompting should be moved to the experimental setup section (2.3). I also didn't quite understand the distinction between these two; an example would help.
* Formatting of Figure 3 can be improved
* What is "dialogue" referring to throughout the paper? I don't believe an actual multi-turn dialogue is taking place during the evaluation
* Text of Table 2 is really tiny
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * I am not completely familiar with the term "cognitive maps", and I'm not sure if this more of a metaphor, or usually applied to actual spatial reasoning tasks (hence "map"). Since the examples in the paper are about navigation, I am wondering if this means the domains studied are mostly about literal maps, or if there are other domains usually studied with the framework of cognitive maps. Are there other domains covered in this study? If so, what are they?
* If there are different domains, how does performance vary across them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their positive evaluation of our work as well as their careful and constructive questions and suggestions. Please see our responses below. We hope that we have addressed any concerns and are happy to address further questions in the discussion period as well.
Weaknesses:
1. We have now improved Figure 1 caption for clarity.
2. Great suggestion, we reorganized the figures and hope to show it in the camera ready.
3. Appreciated, we have now accommodated this.
4. Temperature in LLMs determines randomness in the generated response through the softmax function. This manipulates the probabilities of the next word in a sequence. Because of this, temperature can be thought of as a parameter controlling the diversity of the output. When temperature is set to 0, this results in deterministic or greedy responses with less variance (Note: OpenAI has made it known that even temperature=0 is not entirely deterministic, though this is as close as you can get). Higher temperatures, especially closer to 1, create more diverse and varied text upon repetition. While this is helpful for tasks that may require varied responses or creativity, it’s not great for responses that require precision such as planning trajectories. We are more than happy to add this to the supplementary or integrate a summary in the main text if the reviewer sees fit.
The graph structures are the latent structures of the problems discussed in the paper.
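To make the temperature explanation above concrete, here is a minimal, self-contained sketch of how temperature rescales logits through the softmax (the logit values are toy numbers for illustration, not tied to any particular LLM):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before the softmax.

    Low temperature sharpens the distribution toward the argmax
    (near-greedy, low-variance decoding); high temperature flattens
    it, increasing the diversity of sampled tokens.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical next-token scores
sharp = softmax_with_temperature(logits, 0.1)  # near-deterministic
flat = softmax_with_temperature(logits, 2.0)   # more diverse sampling
# sharp concentrates almost all probability mass on the top token,
# while flat spreads probability more evenly across tokens
```

At temperature 0.1 the top token receives essentially all the probability mass, which is why low-temperature decoding behaves nearly greedily, as described above.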
5. Thank you. Breadth First Search and Depth First Search instructions are provided in the supplementary (lines 55-75), and we have pasted them below to address the reviewer’s question.
5.1. BFS (Breadth First Search) instruction:
“Think carefully before you respond. You can try using Breadth-first search (BFS), it is a graph traversal algorithm that visits all the vertices of a graph in breadth-first order, starting from a given source vertex. In BFS, vertices are visited in layers, where the vertices at distance 1 from the source vertex are visited first, followed by the vertices at distance 2, and so on. BFS uses a queue data structure to keep track of the vertices to be visited, and it ensures that no vertex is visited more than once. BFS is useful for finding the shortest path between two vertices in an unweighted graph, or for exploring all the vertices in a graph.”
5.2. DFS (Depth First Search) instruction:
“Think carefully before you respond. You can try using Depth-first search (DFS), it is a graph traversal algorithm that visits all the vertices of a graph in depth-first order, starting from a given source vertex. In DFS, the algorithm traverses as far as possible along each branch before backtracking. DFS uses a stack data structure to keep track of the vertices to be visited, and it ensures that all vertices connected to a visited vertex are explored before backtracking. DFS is useful for finding cycles in a graph, for exploring all the vertices in a graph, or for finding a path between two vertices. However, unlike BFS, DFS does not guarantee that the shortest path is found.”
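For readers unfamiliar with the two algorithms quoted above, the BFS shortest-path behavior that the instruction describes can be sketched as follows (a toy graph for illustration, not one of the paper's task graphs):

```python
from collections import deque

def bfs_shortest_path(graph, source, target):
    """Breadth-first search over an adjacency-list graph.

    Vertices are visited in layers from the source and never revisited,
    so the first path that reaches the target is a shortest path in the
    unweighted graph -- the guarantee the instruction mentions.
    """
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # target unreachable from source

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_shortest_path(graph, "A", "D"))  # → ['A', 'B', 'D']
```

DFS would instead follow each branch as deep as possible before backtracking, and, as the DFS instruction notes, it carries no shortest-path guarantee.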
6. We agree, and have updated Figure 3 accordingly, which you can find in the PDF. Additionally, we have audited some of the prompts and can provide further explanation as needed. All prompts can be found in the interactive Chatbot Visualization tool (cogeval.github.io).
7. While we don’t use the term "dialogue" in the manuscript, the tasks that test robustness of LLMs to changes in rewards or transition structures involve a 2-step question. Following Momennejad et al. 2017, 2018, the graph is first explained and the LLM is probed for the optimal policy; then a partial change is described and the LLM is probed a second time for the optimal policy. The correct response to the second question requires integrating information from the first and second prompts.
8. We have now changed the table size.
Thanks again for the helpful suggestions, and we are more than happy to address any further feedback.
Questions:
1. Thank you for the question. In lines 39-51 of the original manuscript, we discuss what a cognitive map is, and in lines 52-62 we explain why LLMs might show that capacity. Briefly, the term was coined in cognitive science by Tolman (1948), who reviewed decades of planning and navigation research using mazes. It refers to the relational structures stored in memory, representing a map of the state-space (which can be spatial or non-spatial, e.g., social relations) that is held in the mind rather than externally, hence the phrase "cognitive map". The specific structure of this map, whether it is one-step, multi-step, multi-scale, etc., has been a matter of research over the past century. Since Tolman, research on the neural underpinnings of cognitive maps has won a Nobel prize, and, as Tolman intended, many studies discuss cognitive maps in terms of non-spatial maps (e.g., social maps, associative relational structures) as well. Many models have been proposed, with representation learning and RL being of special relevance, including debates over whether the map is Euclidean or not. We are happy to clarify further if lines 39-62 (and references 5, 1, 9) are unclear, or to add further sections in the supplementary.
2. Thank you for this helpful question. We have now provided a figure to address this constructive question in the attached PDF.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal and apologies for not responding earlier. It has answered my questions and I would still like to see this paper accepted. | Rebuttal 1:
Rebuttal: Dear reviewers,
We are deeply grateful for your careful and detailed review of our work as well as your constructive questions and suggestions. Given the space constraints we have tried to address your questions as best as we could and have provided further material in the *attached PDF*. We hope that we have addressed all your comments and are more than happy to engage in further discussion. We hope to be able to show you how these comments have changed the paper overall in the camera-ready version.
Thank you,
The authors
Pdf: /pdf/2d574ff2651b9af0579343fc7c4cf25c364e21b7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
AdaptSSR: Pre-training User Model with Augmentation-Adaptive Self-Supervised Ranking | Accept (poster) | Summary: The paper tackles user-oriented tasks, e.g. personalized recommendation, and proposes a self-supervised method named AdaptSSR to replace the contrastive learning pre-training target task. It adopts a ranking loss that selects samples of smallest similarity differences and assigns dynamic weight coefficients to ranking parts based on the estimated similarity between the augmented views. Experiments on 6 downstream tasks from 2 datasets and several empirical analyses are conducted to verify the effectiveness of AdaptSSR.
Strengths: The pre-training method and objective are clearly explained, and the equations in the text concisely demonstrate the proposed loss function.
Weaknesses: The Multiple Pairwise Ranking loss, which is the core of the method, is not an original contribution of this paper but an adaptation from Yu et al. [49]. However, there is almost no mention of this work except as the source of the loss function, casting doubt on the novelty of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please elaborate on the differences between the approach proposed in this paper and [49], and highlight the contribution of this paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not discussed the limitations and broader societal impacts in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful for the time and effort you have invested in reviewing our paper. In response to your insightful comments, we have provided detailed explanations and clarifications, which are enumerated below.
**Weakness and Question 1**: The Multiple Pairwise Ranking loss, which is the core of the method, is not an original contribution of this paper, but an adaptation from Yu et al [49]. However, there is almost no mention of this work except for the source of the loss function, casting doubt on the novelty of the paper. Please elaborate on the differences between the approach proposed in this paper and [49], and highlight the contribution of this paper.
**Response**: First, our work differs from Yu et al [49] in the following aspects:
1. **The studied problems are completely different.** Yu et al [49] aim to train a better collaborative filtering model for item recommendation based on users' implicit feedback, while we aim to tackle the semantic inconsistency problem between the augmented views when pre-training a discriminative user model based on user behavior sequences.
2. **The definitions of the target ranking order are completely different.** Yu et al [49] aim to model the order of the user's preference difference between (I) an observed item and an unobserved item, (II) two unobserved items, and (III) two observed items, while we train the user model to capture the similarity orders between (I) the implicitly augmented views, (II) the explicitly augmented views, and (III) views from other users.
3. **How to construct the training triplet is different.** Yu et al [49] just randomly sample items from the corresponding item set, while we design an explicit hard negative sampling strategy to facilitate model training, which selects the pair with the smallest similarity difference for each pairwise ranking order.
4. **How to fuse the two learned pairwise ranking orders is different.** Yu et al [49] use a fixed and unified hyperparameter $\lambda$ in the original multiple pairwise ranking loss, while our augmentation-adaptive fusion mechanism takes the distinct impacts of data augmentation on different behavior sequences into account and employs a dynamic coefficient $\lambda_i$ for each training sample $S_i$. The value of $\lambda_i$ is calculated based on the estimated semantic similarity between the augmented views along the training procedure.
The only connection between our work and Yu et al [49] is that we both try to simultaneously learn a ranking order between three terms. That's why we adopt the Multiple Pairwise Ranking loss for model training in our work. We will highlight the difference between our work and Yu et al [49] in the revised version of our paper.
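To make the hard-negative sampling strategy in point 3 concrete, the pair selection can be sketched as follows (a simplified illustration of the idea, not our actual implementation):

```python
def select_hard_pair(sims_higher, sims_lower):
    """For the target order "every score in sims_higher >= every score
    in sims_lower", return the cross pair with the smallest similarity
    difference -- the hardest pair to satisfy -- so the pairwise
    ranking loss focuses on it."""
    a = min(sims_higher)  # weakest member of the should-be-higher set
    b = max(sims_lower)   # strongest member of the should-be-lower set
    return a, b           # a - b is the minimal margin over all pairs

pair = select_hard_pair([0.9, 0.7], [0.4, 0.6])
# hardest pair: (0.7, 0.6), with margin 0.1
```

Because the minimum of `a - b` over all cross pairs is attained at `min` of the higher set and `max` of the lower set, a single pass suffices; random sampling, by contrast, would usually pick an easy pair.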
Second, the main contributions of our paper are as follows:
1. We identify the semantic inconsistency problem when applying contrastive learning to user behavior sequences.
2. To tackle this problem, we escape the existing contrastive learning framework and propose a new augmentation-adaptive self-supervised ranking task. Instead of simply training the model to distinguish the positive augmented views from the negative ones as contrastive learning, we train the user model to capture a more precise and realistic similarity order between the implicitly augmented view, the explicitly augmented view, and views from other users.
3. Different from contrastive learning which simply maximizes the similarity between the augmented views for every sample in a fixed way, we further design an augmentation-adaptive fusion mechanism that adaptively adjusts the similarity order constraint applied to each sample based on the semantic similarity between the augmented views.
4. Extensive experiments on both public and industrial datasets verify that our AdaptSSR can be applied to different kinds of user behavior sequences and bring significant performance improvement to various downstream tasks.
To our best knowledge, our work is the first to tackle the semantic inconsistency problem when applying contrastive learning in the user modeling domain, and our method can be further generalized to other domains where augmentation choices are not straightforward or could alter the semantics of the data. We will highlight our contributions in the revised version of our paper.
**Limitation 1**: The authors have not discussed the limitations and broader societal impacts in the paper.
**Response**: Thanks for pointing it out. We have discussed the limitations of our work in the Appendix due to the space limit. We will put them in the main content in the revised version of our paper.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my questions. I'm maintaining my previous score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback. We are delighted to know that our responses have addressed your concerns. We sincerely appreciate your time and efforts in helping us improve the quality of our work. Thank you very much.
---
Rebuttal 2:
Comment: Dear Reviewer toy8,
Thank you for your insightful review comments. Considering the deadline for the author-reviewer discussion period is approaching, we are writing to follow up on our previous rebuttal submission and inquire if there are any remaining concerns or questions that we can address to improve the quality of our paper. We are open to constructive feedback and eager to work with you to improve our work.
Thanks,
Authors. | Summary: Recent studies have explored pre-training user models with contrastive learning tasks to address data sparsity issues in user-oriented tasks. However, existing augmentation methods may introduce noisy or irrelevant interests, leading to negative transfer. To overcome this, a new approach called Augmentation-Adaptive Self-Supervised Ranking (AdaptSSR) is proposed, which replaces contrastive learning with a multiple pairwise ranking loss. An augmentation-adaptive fusion mechanism is also introduced to combine learned ranking orders based on the similarity between augmented views. Extensive experiments demonstrate the effectiveness of AdaptSSR across various tasks and datasets.
Strengths: 1. The paper's motivation is reasonable, directly adopting contrastive learning may lead to consistency problems in recommendations.
2. This paper proposes a novel approach to combine implicit and explicit augmentations.
Weaknesses: 1. The main contribution of this paper is adding an order constraint in the loss function, which is a rather incremental modification of the existing contrastive learning framework. The main idea of the paper is a fusion of explicit augmentation and implicit augmentation via the loss function. Thus, the novelty is limited.
2. It's unclear whether the added constraint is necessary. Since "$u$ and $u^+$ originate from exactly the same input behavior sequence" as the authors commented in line 123, I think $sim(u, u^+)\ge sim(u, u^-)$ and $sim(u, u^+) \ge sim(u, \hat{u})$ should always hold. I don't understand why we need the $sim(u, u^+)$ term here. Without the $sim(u, u^+)$ term, the proposed method reduces to common contrastive learning.
3. Even if the constraint is meaningful, the authors' analysis cannot convince me why such a constraint may help generalization. Why may $sim(u, \hat{u}) > sim(u, u^+)$ harm the downstream performance? Intuitively, suppose the objective of original contrastive learning is overly strong, we should loosen the constraints. For example, $sim(u, \hat{u}) \ge sim(u, u^-) - \epsilon$. However, the authors make the constraints even stronger by adding another constraint term. This does not make sense to me.
---
Edit after rebuttal: The authors' response resolved my primary concern about technical correctness. I'd like to raise my score to a borderline reject regarding the novelty of this work.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: 1. Please explain what is implicit augmentation, how is $u^+$ generated, and why equation (1) should hold.
2. The authors introduce a Augmentation-Adaptive Fusion coefficient $\lambda$ needs further discussion. Is the $\lambda$ fixed during training, i.e. stop_gradient($\lambda$), so that the gradient calculation does not involve $\lambda$?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 2 fair
Limitations: Please address the issues highlighted in the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your time in reviewing our paper. However, we suspect that there may be certain misconceptions. In order to address your concerns, we provide detailed, point-by-point responses as follows. **Due to the character limit of each rebuttal, more responses are provided in the global response.**
**Weakness 1**: The main contribution of this paper is adding an order constraint in the loss function, which is a rather incremental modification of the existing contrastive learning framework. The main idea of the paper is a fusion of explicit augmentation and implicit augmentation by the loss function. Thus, the novelty is limited.
**Response**: We appreciate your feedback. Indeed, as you have pointed out, the introduction of an order constraint represents one facet of our contributions. However, it is important to clarify that this is not the sole contribution of our work and our method differs from contrastive learning in several aspects. Specifically, the main contributions of our paper are as follows:
1. We identify the semantic inconsistency problem when applying contrastive learning to user behavior sequences.
2. To tackle this problem, we escape the existing contrastive learning framework and propose a new augmentation-adaptive self-supervised ranking task. Instead of simply training the model to distinguish the positive augmented views from the negative ones as contrastive learning, we train the user model to capture a more precise and realistic similarity order between the implicitly augmented view, the explicitly augmented view, and views from other users.
3. Different from contrastive learning which simply maximizes the similarity between the augmented views for every sample in a fixed way, we further design an augmentation-adaptive fusion mechanism that adaptively adjusts the similarity order constraint applied to each sample based on the semantic similarity between the augmented views.
4. Extensive experiments on both public and industrial datasets verify that our AdaptSSR can be applied to different kinds of user behavior sequences and bring significant performance improvement to various downstream tasks.
Therefore, our method differs from contrastive learning in terms of the constraint applied to each sample and the design of the loss function. To our best knowledge, our work is the first to tackle the semantic inconsistency problem when applying contrastive learning in the user modeling domain, and our method can be further generalized to other domains where augmentation choices are not straightforward or could alter the semantics of the data. Overall, we think the contribution and novelty of our work are non-trivial.
**Weakness 2**: It's unclear whether the added constraint is necessary. Since "$u$ and $u^+$ originate from exactly the same input behavior sequence" as the authors commented in line 123, I think $sim(u,u^+)\geq sim(u,u^-)$ and $sim(u,u^+)\geq sim(u,\hat{u})$ should always hold. I don't understand why we need the $sim(u,u^+)$ term here. Without the $sim(u,u^+)$ term, the proposed method reduces to common contrastive learning.
**Response**: First, it is essential to clarify that although $u$ and $u^+$ originate from exactly the same input behavior sequence, different independently sampled dropout masks are applied in the user model, which adds distinct noise to the input sequence in the feature space. Therefore, $u$ and $u^+$ are different, and the pairwise similarity order $sim(u,u^+)\geq sim(u,u^-)$ and $sim(u,u^+)\geq sim(u,\hat{u})$ do not always hold.
In addition, the $sim(u,u^+)$ term is necessary as it works as the upper bound of $sim(u,\hat{u})$. Without this term, our method will apply the same constraint as contrastive learning and the model will directly maximize $sim(u,\hat{u})$ for every sample while neglecting the potential semantic inconsistency between the augmented views. Besides, our augmentation-adaptive fusion mechanism can further adjust $sim(u,\hat{u})$ between the upper bound $sim(u,u^+)$ and the lower bound $sim(u,u^-)$ properly for each sample based on the semantic similarity between the augmented views. Moreover, from the results in Table 2, we can find that our AdaptSSR consistently outperforms existing contrastive learning-based pre-training methods (e.g., CCL, CL4SRec, CoSeRec), which further verifies the effectiveness of the additional $sim(u,u^+)$ term.
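The first point above can be illustrated with a toy, pure-Python stand-in for the user model (the real model is a Transformer encoder over behavior sequences; the random linear map and dimensions here are illustrative only):

```python
import math
import random

random.seed(0)
DIM, OUT = 64, 8
# a stand-in linear "encoder" (hypothetical; only meant to show the
# effect of independent dropout masks on two passes of the same input)
W = [[random.gauss(0, 1) for _ in range(OUT)] for _ in range(DIM)]

def encode(x, p=0.1):
    # inverted dropout: independently zero each feature, rescale the
    # survivors by 1/(1-p), then project -- each call samples a fresh
    # mask, as in training mode
    dropped = [0.0 if random.random() < p else xi / (1 - p) for xi in x]
    return [sum(dropped[i] * W[i][j] for i in range(DIM)) for j in range(OUT)]

def cosine(a, b):
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return dot / (na * nb)

x = [random.gauss(0, 1) for _ in range(DIM)]  # one feature vector
u = encode(x)       # first view
u_plus = encode(x)  # implicit view: same input, independent dropout mask

sim = cosine(u, u_plus)  # typically high, but below 1: the views differ
```

Because the two masks are sampled independently, $u \neq u^+$ even though the input sequence is identical, which is why the pairwise orders involving $sim(u,u^+)$ are non-trivial constraints.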
**Weakness 3.1**: Even if the constraint is meaningful, the authors' analysis cannot convince me why such constraint may help generalization. Why may $sim(u,\hat{u})>sim(u,u^+)$ harm the downstream performance?
**Response**: We acknowledge that we may not have made this sufficiently clear in our paper, and we appreciate the opportunity to clarify. The reason why we require $sim(u,u^+)\geq sim(u,\hat{u})$ is that the difference between $u$ and $u^+$ is only caused by the different dropout masks applied by the implicit augmentation, while the difference between $u$ and $\hat{u}$ is caused by both the implicit augmentation and the explicit augmentation, which directly modifies the input behavior sequence on the data level. If $sim(u,\hat{u})>sim(u,u^+)$, it means the user model cannot well capture the characteristics and interests of the user from the behavior sequence, and the generated user embedding cannot correctly reflect the similarity between different users. Thus, it will transfer incorrect prior knowledge to the downstream task and degrade the model performance, especially when the downstream task faces a data sparsity problem.
---
Rebuttal 2:
Comment: We appreciate your acknowledgment of our response and we are delighted to know that our replies have clarified the technical correctness of our paper. To further address your concern regarding the novelty of our work, we'd like to highlight that our work is the first to identify and tackle the semantic inconsistency problem when applying contrastive learning in the user modeling domain. Our AdaptSSR escapes the existing contrastive learning framework and provides a new pre-training schema that can be further generalized to other domains where augmentation choices are not straightforward or could alter the semantics of the data. We hope the contribution of our work can be recognized. If you have any other concerns, please feel free to reach out to us. We assure you that we will do our best to resolve any concerns you may have about this paper. | Summary: This paper proposes Augmentation-Adaptive Self-Supervised Ranking (AdaptSSR), a new user model pre-training paradigm, which alleviates the requirement of semantic consistency between the augmented views while pre-training a discriminative user model. Conventional methods assume that different views of the same behaviour sequence constructed via data augmentation are semantically consistent, while in practice existing augmentation methods tend to lose certain interests of the user or introduce noisy interests that the user does not have. AdaptSSR addresses this issue by adopting a multiple pairwise ranking loss which trains the user model to capture the similarity orders between the explicitly augmented views, the implicitly augmented views, and views from other users. An explicit hard negative sampling strategy and an augmentation-adaptive fusion mechanism are also introduced to facilitate model training. Extensive experiments on both public and industrial datasets verify the effectiveness of AdaptSSR.
Strengths: - The proposed approach is technically sound and the empirical results validates the effectiveness of the method.
- The paper is well-written with very clear figures.
- Code is available which makes it easy to reproduce the results.
Weaknesses: An important hyperparameter sensitivity analysis is missing: how does the value of $\lambda$ affect the model performance? Compared with existing models, AdaptSSR introduces an extra SimCSE-inspired implicit augmentation approach. It remains unclear in the paper if the performance improvement is primarily due to the introduction of implicit data augmentation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Only three random augmentation operators are used in the paper. Does introducing more augmentation operators help improve the performance? Do informative augmentation operations introduced by CoSeRec help improve AdaptSSR's performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The extra computational cost introduced by AdaptSSR is not analyzed in the paper, it would be useful if the authors can demonstrate the tradeoff between training time and performance for various models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We very much appreciate your positive opinions on the contribution and presentation of this paper. We also thank you for the valuable comments and our detailed responses are as follows.
**Weakness 1**: An important hyperparameter sensitivity analysis is missing: how does the value of $\lambda$ affect the model performance?
**Response**: Many thanks for this good question. $\lambda$ is a critical hyperparameter in our method since it controls how the two learned pairwise ranking orders: $sim(u,u^+)\geq sim(u,\hat{u})$ and $sim(u,\hat{u})\geq sim(u,u^-)$ are fused. As we mentioned in Line 171 of our paper, the effects of data augmentation vary significantly across diverse behavior sequences. As a result, a fixed and unified $\lambda$ is not enough to combine the two pairwise ranking orders properly for different samples. That's why we design an augmentation-adaptive fusion mechanism that replaces the hyperparameter $\lambda$ with a dynamic coefficient $\lambda_i$ for each training sample $S_i$. The value of $\lambda_i$ is calculated based on the average similarity between the user representations generated from the augmented views $\hat{S}_i$ and $\tilde{S}_i$ (Equation (7)) along the training procedure. As a result, we no longer need to manually set the value of $\lambda$, and the effectiveness of our augmentation-adaptive fusion mechanism has been verified in Section 4.4. We will highlight how we replace $\lambda$ with $\lambda_i$ in the revised version of our paper.
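As a rough illustration of the fusion described above, the sketch below combines the two pairwise differences inside a single BPR-style logistic loss with a per-sample coefficient derived from the augmented-view similarity. This is a schematic only: the mapping from similarity to $\lambda_i$ and the weighting shown here are plausible stand-ins, not the paper's Equation (7).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fused_ranking_loss(sim_pos, sim_aug, sim_neg):
    """Hypothetical fusion of the two pairwise orders
    sim(u,u+) >= sim(u,u_hat) and sim(u,u_hat) >= sim(u,u-).

    sim_pos: sim(u, u+)    -- implicitly augmented view
    sim_aug: sim(u, u_hat) -- explicitly augmented view
    sim_neg: sim(u, u-)    -- view from another user

    lam is derived per sample from the augmented-view similarity,
    so the constraint adapts to how much augmentation altered the
    sequence semantics.
    """
    lam = (sim_aug + 1.0) / 2.0  # map a cosine in [-1,1] to [0,1]
    margin = lam * (sim_pos - sim_aug) + (1.0 - lam) * (sim_aug - sim_neg)
    return -math.log(sigmoid(margin))  # logistic (BPR-style) ranking loss

well_ordered = fused_ranking_loss(0.9, 0.5, -0.5)
violated = fused_ranking_loss(0.5, 0.9, -0.5)  # sim(u,u+) >= sim(u,u_hat) broken
```

A well-ordered triplet yields a smaller loss than one that violates the order, which is the qualitative behavior the fusion mechanism is designed to enforce.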
**Weakness 2**: Compared with existing models, AdaptSSR introduces an extra SimCSE-inspired implicit augmentation approach. It remains unclear in the paper if the performance improvement is primarily due to the introduction of implicit data augmentation.
**Response**: Many thanks for pointing it out. Actually, we have taken the impact of introducing extra implicit augmentation into account and compared our AdaptSSR with CLUE [1] in our experiments. It is worth mentioning that CLUE shares the same model structure with AdaptSSR but only uses implicitly augmented views for contrastive pre-training. From the results in Table 2, we can find that our AdaptSSR consistently outperforms CLUE by a large margin on various downstream tasks. We argue that it is because the implicit augmentation caused by the dropout mask alone is too weak. The user model can easily distinguish the positive samples from others, thus providing limited knowledge for downstream tasks. Such results illustrate that introducing extra implicit augmentation is not the primary reason for performance improvement. We will highlight the comparison between AdaptSSR and CLUE in the revised version of our paper.
[1] Mingyue Cheng et al. Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training. ICDM 2021.
**Question 1**: Only three random augmentation operators are used in the paper. Does introducing more augmentation operators help improve the performance? Do informative augmentation operations introduced by CoSeRec help improve AdaptSSR's performance?
**Response**: Thanks for this insightful question. As we mentioned in Line 147 of our paper, our method can be combined with various data augmentation methods. We have evaluated the performance of AdaptSSR when combining it with several existing data augmentation methods in Section 4.3, including these informative augmentation operators introduced by CoSeRec. From the results in Figure 3 (the three dashed lines on top), we can find that our method achieves similar performance when combined with different data augmentation methods. We argue that this is because our augmentation-adaptive fusion mechanism can always properly combine the learned pairwise ranking orders based on the estimated similarity between the explicitly augmented views constructed by different augmentation methods, which leads to similar model performance. We will add this analysis to the revised version of our paper. Once again, we express our appreciation for your insightful feedback.
**Limitations 1**: The extra computational cost introduced by AdaptSSR is not analyzed in the paper, it would be useful if the authors can demonstrate the tradeoff between training time and performance for various models.
**Response**: We extend our gratitude for your insightful suggestions. The average pre-training times of different methods on the TTL dataset and the App dataset are listed in the following table. Our observations indicate that the average pre-training time of our AdaptSSR is similar to that of existing contrastive learning-based methods, such as CL4SRec and CoSeRec. Although AdaptSSR requires inputting each behavior sequence into the model twice, we implement it by duplicating the input sequences in the batch size dimension. Thus, we only need to input all the sequences into the model once, which can be well parallelized by the GPU. Besides, for each pairwise ranking order, our hard negative sampling strategy only selects the pair with the smallest similarity difference to compute the multiple pairwise ranking loss, which avoids the costly softmax operation and cross-entropy loss calculation in the existing contrastive learning task. As a result, our AdaptSSR will not greatly increase the overall computational cost while bringing performance improvement to various downstream tasks. We will add the result and analysis to the revised version of our paper.
\begin{array}{ccc}\hline
\text{Pre-train Method}&\text{TTL}&\text{App}\\\\\hline
\text{PeterRec}&2.927±0.022\text{h}&1.537±0.010\text{h}\\\\
\text{PTUM}&2.015±0.009\text{h}&2.055±0.018\text{h}\\\\
\text{CLUE}&1.453±0.016\text{h}&1.633±0.015\text{h}\\\\
\text{IDICL}&1.257±0.021\text{h}&1.162±0.020\text{h}\\\\
\text{CL4SRec}&1.868±0.013\text{h}&2.081±0.017\text{h}\\\\
\text{CoSeRec}&1.902±0.015\text{h}&2.104±0.023\text{h}\\\\
\text{DuoRec}&1.535±0.020\text{h}&1.658±0.015\text{h}\\\\
\text{AdaptSSR}&1.539±0.017\text{h}&1.830±0.012\text{h}\\\\\hline
\end{array}
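The batch-duplication trick mentioned above can be sketched as follows (a toy stochastic encoder stands in for the real user model; the point is that one forward pass over the doubled batch yields both views of every sequence):

```python
import random

random.seed(0)

def encoder_batch(sequences, p=0.1):
    """Toy stochastic batch 'encoder': sums each sequence under an
    independently sampled dropout mask. Stands in for one GPU-parallel
    forward pass of the real user model over the whole batch."""
    return [sum(0.0 if random.random() < p else x / (1 - p) for x in seq)
            for seq in sequences]

batch = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
doubled = batch + batch            # duplicate along the batch dimension
embs = encoder_batch(doubled)      # a single forward pass over 2N sequences
n = len(batch)
u, u_plus = embs[:n], embs[n:]     # split back into the two views per user
```

Since the dropout masks for the two copies are sampled independently within the one pass, this is equivalent to two separate passes but keeps the GPU fully utilized, which is consistent with the timing results in the table above.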
---
Rebuttal Comment 1.1:
Comment: After reading the authors' rebuttal, most of my concerns have been properly addressed. Therefore I have raised my score. Thanks for the detailed explanation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation and valuable feedback. We will incorporate the additional results and analysis into the final version of our paper. We sincerely appreciate your time and efforts in helping us improve the quality of our work. | Summary: The authors tackle the problem of doing self-supervised learning for user modeling. Inspired by the successes of contrastive learning approaches in the image setting, they adapt contrastive learning to the user modeling setting. However, in user modeling the augmentations typically used are not very suitable for contrastive learning because they can change the semantics of the data, thus forcing similarity between augmented views can be problematic. They instead produce three views: the anchor, a similar "implicitly" augmented view, and a less similar "explicitly" augmented view. The implicitly augmented view is trained to be more similar to the anchor than the explicitly augmented view. This escapes the problematic similarity training that plain contrastive learning would have in user modeling.
Strengths: 1. The paper was well-written and the diagrams were easy to understand. It made the paper easy to read and review. The contributions were clearly stated and explained in the paper.
2. The method is novel and original as far as I know. This method could be generalizable to other domains where augmentation choices are not straightforward and could alter the semantics of the data.
3. The method is well-designed: the ranking loss does help mitigate the "make semantically different augmented views the same" problem, and furthermore helps balance the focus of the loss between the implicit vs explicit contrast and the explicit vs other user contrast.
4. The improvements in the empirical results are consistent and seem to be significant.
Weaknesses: 1. Going back to the example where the user behavior is represented by a sequence of images, is it possible to just do per-image augmentation (choices for these exist and are widely used) and then perform a typical InfoNCE style contrastive loss on the user embeddings? For text one could do something similar using masking augmentations and such. I did not see a comparison to this baseline and I wonder how well it would perform. I think this is something that would be critical to compare against.
2. While the paper is written well, I think the paper should define what user modeling is and what the downstream tasks are earlier in the paper (or in the abstract). For a while it was not clear to me what problem the paper was trying to solve, as someone who has not worked on user modeling.
3. Adding error bars into the results tables would help in understanding the significance of the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions are listed in the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Seems sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your careful reading and constructive comments. We also very much appreciate your acknowledgment that our proposed method is novel and well-designed. Following are our detailed responses to your comments.
**Weakness 1**: Going back to the example where the user behavior is represented by a sequence of images, is it possible to just do per-image augmentation and then perform a typical InfoNCE style contrastive loss on the user embeddings? For text, one could do something similar using masking augmentations and such. I did not see a comparison to this baseline and I wonder how well it would perform. I think this is something that would be critical to compare against.
**Response**: Thanks for your insightful comment. Integrating multimodal information related to user behaviors into user model pre-training is an interesting and promising direction. However, it is essential to clarify that the images depicted in Figure 1 are just used as illustrations of what each news article clicked by the user is about, so the readers can understand the impact of different augmentation methods on the behavior sequence more intuitively.
In this work, our methodology aligns with the prevalent settings in many previous works [1-4] and real-world scenarios, where each behavior is simply represented by an ID. We refrain from using any ancillary information related to user behaviors (such as the image or text description), so we cannot perform per-image or per-text augmentation. Sorry for the unclear description. We will provide a more explicit explanation of the images presented in Figure 1 in the revised version of our paper.
[1] Fajie Yuan et al. Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation. SIGIR 2020. \
[2] Mingyue Cheng et al. Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training. ICDM 2021. \
[3] Xu Xie et al. Contrastive Learning for Sequential Recommendation. ICDE 2022. \
[4] Shuqing Bian et al. Contrastive Curriculum Learning for Sequential User Behavior Modeling via Data Augmentation. CIKM 2021.
**Weakness 2**: While the paper is written well, I think the paper should define what user modeling is and what the downstream tasks are earlier in the paper (or in the abstract). For a while, it was not clear to me what problem the paper was trying to solve, as someone who has not worked on user modeling.
**Response**: We appreciate your constructive suggestion. Indeed, user modeling aims to capture the user's characteristics or interests for a specific user-oriented task (e.g., personalized recommendation and click-through rate prediction) and encode them into a dense representation with a user representation model. As existing supervised user modeling methods tend to suffer from the data sparsity problem, our work aims to pre-train the user representation model on massive unlabeled user behavior data, which enables it to extract users' general characteristics or interests from their historical behaviors and can be transferred to benefit various downstream tasks. We will reorganize our paper and clarify the definition in the Abstract and Introduction section. Thank you again for your constructive comment.
**Weakness 3**: Adding error bars into the results tables would help in understanding the significance of the results.
**Response**: Thanks for the advice. We have added the standard deviation of each experiment to Table 2 in our paper, shown as follows (highlighted in bold). Our further t-test results show that, compared with the second-best method, the improvements of our AdaptSSR are significant at $p<0.01$ on every downstream task. We will add the results to the revised version of our paper.
| Pre-train Method | $\mathcal{T}_1$ Acc | Impr% | $\mathcal{T}_2$ Acc | Impr% | $\mathcal{T}_3$ NDCG@10 | Impr% | $\mathcal{T}_4$ NDCG@10 | Impr% | $\mathcal{T}_5$ AUC | Impr% | $\mathcal{T}_6$ AUC | Impr% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| None | 0.6287**±0.0005** | - | 0.5224**±0.0016** | - | 0.0199**±0.0003** | - | 0.0287**±0.0007** | - | 0.7863**±0.0006** | - | 0.7514**±0.0014** | - |
| PeterRec | 0.6362**±0.0011** | 1.19 | 0.5314**±0.0007** | 1.72 | 0.0237**±0.0002** | 19.12 | 0.0306**±0.0008** | 6.37 | 0.7961**±0.0013** | 1.25 | 0.7604**±0.0010** | 1.20 |
| PTUM | 0.6321**±0.0014** | 0.54 | 0.5305**±0.0004** | 1.55 | 0.0229**±0.0003** | 14.65 | 0.0296**±0.0003** | 3.17 | 0.7948**±0.0011** | 1.08 | 0.7582**±0.0013** | 0.90 |
| CLUE | 0.6338**±0.0010** | 0.81 | 0.5323**±0.0005** | 1.90 | 0.0238**±0.0002** | 19.62 | 0.0305**±0.0021** | 6.02 | 0.7990**±0.0006** | 1.62 | 0.7603**±0.0016** | 1.18 |
| CCL | 0.6376**±0.0011** | 1.42 | 0.5337**±0.0009** | 2.16 | 0.0243**±0.0002** | 21.93 | 0.0332**±0.0013** | 15.74 | 0.8022**±0.0007** | 2.02 | 0.7735**±0.0010** | 2.94 |
| IDICL | 0.6388**±0.0004** | 1.61 | 0.5345**±0.0005** | 2.32 | 0.0246**±0.0002** | 23.63 | 0.0342**±0.0004** | 19.01 | 0.8034**±0.0005** | 2.17 | 0.7792**±0.0008** | 3.70 |
| CL4SRec | 0.6371**±0.0014** | 1.34 | 0.5343**±0.0005** | 2.28 | 0.0241**±0.0003** | 21.12 | 0.0329**±0.0006** | 14.42 | 0.8014**±0.0008** | 1.92 | 0.7702**±0.0005** | 2.50 |
| CoSeRec | 0.6389**±0.0003** | 1.62 | 0.5353**±0.0009** | 2.47 | 0.0244**±0.0002** | 22.53 | 0.0333**±0.0005** | 15.77 | 0.8048**±0.0006** | 2.35 | 0.7771**±0.0009** | 3.42 |
| DuoRec | 0.6350**±0.0009** | 1.00 | 0.5326**±0.0006** | 1.95 | 0.0239**±0.0001** | 20.12 | 0.0311**±0.0016** | 8.32 | 0.8003**±0.0009** | 1.78 | 0.7685**±0.0009** | 2.28 |
| AdaptSSR | 0.6553**±0.0004** | 4.23 | 0.5441**±0.0002** | 4.15 | 0.0261**±0.0003** | 30.71 | 0.0373**±0.0003** | 29.77 | 0.8230**±0.0003** | 4.67 | 0.7992**±0.0005** | 6.36 |
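As a rough illustration of the significance test mentioned above, a t statistic can be computed directly from reported means and standard deviations. The sketch below uses Welch's unpaired test and assumes 5 repeated runs; both the test variant and the run count are our assumptions, since the rebuttal does not state them.

```python
import math

def welch_t(mean1, std1, n1, mean2, std2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    se1, se2 = std1 ** 2 / n1, std2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# AdaptSSR vs. the second-best method (CoSeRec) on task T1 accuracy;
# the run count n = 5 is our assumption.
t, df = welch_t(0.6553, 0.0004, 5, 0.6389, 0.0003, 5)
# For df around 7, the critical t value at p = 0.01 is roughly 3, so a much
# larger t indicates a significant improvement.
```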
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I will keep my score after reading the rebuttals and the other reviews. Thanks for adding significance test results.
---
Reply to Comment 1.1.1:
Comment: We are sincerely grateful for your appreciation and valuable comments. We will revise our paper accordingly to incorporate your suggestions and the additional results. We really appreciate your time and efforts in helping us improve the quality of our work. Thank you very much. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their appreciation and constructive comments. We have provided detailed responses to each reviewer's concerns and questions in the following rebuttals. We hope our responses will address your concerns and strengthen our paper. We are happy to respond to any new questions during the discussion period.
---
**More responses to Reviewer 8cxi**
**Weakness 3.2**: Intuitively, suppose the objective of original contrastive learning is overly strong, we should loosen the constraints. For example, $sim(u,\hat{u})\geq sim(u,u^-)-\epsilon$. However, the authors make the constraints even stronger by adding another constraint term. This does not make sense to me.
**Response**: We appreciate your insightful thinking. However, the primary issue of contrastive learning does not lie in an overly strong objective. The problem is that it imposes a fixed and inaccurate constraint $sim(u,\hat{u})\geq sim(u,u^-)$ on every sample while the semantic similarity between the augmented views cannot be guaranteed. The InfoNCE-style contrastive loss will simply maximize $sim(u,\hat{u})$ no matter whether the augmented views are similar or not, which will lead to a negative transfer for the downstream task. Our method applies an adaptive similarity order constraint to each sample by adjusting $sim(u,\hat{u})$ between the upper bound $sim(u,u^+)$ and the lower bound $sim(u,u^-)$ based on the semantic similarity between the augmented views. We will refine our writing to make our motivation clearer.
**Question 1**: Please explain what is implicit augmentation, how is $u^+$ generated, and why equation (1) should hold.
**Response**: We appreciate your comment. We will respond to each of your questions in a sequential manner.
- As we mentioned in Line 99 of our paper, the implicit augmentation is performed via the dropout module in the model, which adds noise to the input sequence in the feature space.
- As we mentioned in Line 148 of our paper, given a behavior sequence $S$, we input it into the model twice with different independently sampled dropout masks and denote the generated user representations as $u$ and $u^+$.
- Since the difference between $u$ and $u^+$ is only caused by the different dropout masks applied by the implicit augmentation, while the difference between $u$ and $\hat{u}$ is caused by both the implicit augmentation and the explicit augmentation, which directly modifies the input behavior sequence at the data level, we require the model to capture the similarity order $sim(u,u^+)\geq sim(u,\hat{u})$. Similarly, the difference between $u$ and $u^-$ is caused by both the implicit augmentation and the distinct interests of different users. Therefore, we require the model to capture the similarity order $sim(u,u^+)\geq sim(u,u^-)$. Since the explicit augmentation modifies $S$ at the data level and the semantic consistency between $S$ and $\hat{S}$ cannot be guaranteed, $sim(u,\hat{u})$ should be placed between $sim(u,u^+)$ and $sim(u,u^-)$, which leads to the final similarity order $\Gamma: sim(u,u^+)\geq sim(u,\hat{u})\geq sim(u,u^-)$ (Equation (1)). Our goal is to pre-train the user model to adjust $sim(u,\hat{u})$ properly between $sim(u,u^+)$ and $sim(u,u^-)$ based on the semantic similarity between the augmented views.
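A similarity order like the one in Equation (1) is typically enforced with pairwise ranking terms. The sketch below is an illustrative BPR-style loss over the three similarities; it is not the paper's exact multiple pairwise ranking loss, and the cosine-similarity choice and toy vectors are our assumptions.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_ranking_loss(u, u_pos, u_hat, u_neg):
    """Toy loss encouraging sim(u, u+) >= sim(u, u^) >= sim(u, u-).

    Each order pair contributes -log(sigmoid(margin)); the paper's actual
    multiple pairwise ranking loss differs in its details.
    """
    s_pos, s_hat, s_neg = cos(u, u_pos), cos(u, u_hat), cos(u, u_neg)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return -np.log(sigmoid(s_pos - s_hat)) - np.log(sigmoid(s_hat - s_neg))
```

When the intended order holds the loss is small; swapping the implicitly augmented view with the negative view increases it, which is the behavior the ranking objective relies on.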
**Question 2**: The authors introduce an Augmentation-Adaptive Fusion coefficient $\lambda$ that needs further discussion. Is $\lambda$ fixed during training, i.e., $\operatorname{stop\_gradient}(\lambda)$, so that the gradient calculation does not involve $\lambda$?
**Response**: As we mentioned in Line 175 of our paper, our augmentation-adaptive fusion mechanism replaces the fixed and unified hyperparameter $\lambda$ with a dynamic coefficient $\lambda_i$ for each training sample $S_i$. The value of $\lambda_i$ is dynamically calculated along the training procedure based on the average similarity between the user representations generated from the augmented views $\hat{S}_i$ and $\tilde{S}_i$ (Equation (7)). It is not a learnable parameter, so it is not involved in the gradient calculation. We will highlight how $\lambda_i$ is dynamically calculated in the revised version of our paper. | NeurIPS_2023_submissions_huggingface | 2023 |
Autonomous Capability Assessment of Sequential Decision-Making Systems in Stochastic Settings | Accept (poster) | Summary: This paper introduces QACE, an algorithm for automatically learning the capabilities of sequential decision making agents through distinguishing queries. The problem is formulated in terms of the predicates and capabilities of an agent, which are assumed to be known a priori. QACE then aims to compute a transition model that encodes the probabilities that executing a given capability in a given state s will lead to state s'.
Strengths: **Originality:**
This is an original piece of work (novel combination of techniques) and related work seems to be adequately cited.
**Quality:**
The submission seems to be technically sound and the claims are supported. The authors discuss limitations of their work. The methods are appropriate but it could have been interesting to see a comparison against simpler baselines, such as against an approach that randomly generates queries.
**Clarity:**
The submission is well written. The process for generating distinguishing queries could be demonstrated through figures (added to the supplemental material, for example) that show how the tree is built and how the pruning is executed. Similar strategies could be used to provide examples of the non-deterministic model and how it is transformed into a probabilistic model. More detail on background could be provided, for example with sections (e.g., in the supplemental material) introducing FOND planning and PPDDL. Also, sometimes it feels like some claims could use a bit more explanation (e.g., question c), below).
**Significance:**
I think the results are important, however it is not clear to me who would leverage QACE or how. Is it the actual users? Is it robot designers/engineers?
EDIT:
I read the author's rebuttal, which helped clarify matters I had not fully understood.
Weaknesses: Please see above for strengths and weaknesses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **a)**
In the introduction, the authors claim that part of what makes this problem relevant is to allow users to understand what their robots can do and under what conditions. However it is unclear to me how QACE could help a user learn to take the most out of its robot. If instead QACE is meant to be used by a different stakeholder group, then I think this is not clear in the paper.
**b)**
Is a learning time of 20 or 30 minutes a big overhead for a user? Would they feel frustrated and give up from trying to learn how to use the robot? It is difficult to evaluate if the learning time of QACE is acceptable if there is no baseline against which to compare it.
**c)**
[lines 283-284] "we preempt this issue by creating a pool of states S that can execute the capabilities using a directed exploration of the state space using partially learned models.", it is not clear to me how having this pool of states can prevent the generation of queries to take "forever" (in cases where the hypotheses cannot be pruned directly). Also, what is a partially learned model in such cases and how do you get them and what do you mean with "directed exploration of the state space"?
**d)**
[Proof of proposition 1] Why is it not possible, according to Alg.1, for the two models to "either have different preconditions for c′ or different effects." Isn't this necessary for the FOND planning problem to have a solution? "the FOND planning problem ⟨M_ij , s_I_ij ,G_ij⟩, which has a solution if both the models have different precondition or at least one different effect for the same capability."
**Comments:**
An introduction to the basics of FOND planning is missing. Providing one in the supplemental material could be helpful.
Similarly, some background on PPDDL would help with Figure 2 and section 1.3 of the supplemental material.
Figure 5. Standard deviation of QACE is hard to see. For example changing colours could improve readability.
[minor details]
- [line 142] "model M that should ideally be same as T'" ==> **the** same as T'?
- [line 160] "formalae"
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss some limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for detailed review and suggestions. We plan to use the additional page to incorporate them including suggestions for the description of FOND and PPDDL models.
**Quality:** GLIB uses such a method that generates random traces and hence is used in our comparison as a baseline. We will clarify this in addition to GLIB's summary in the final version.
We would like to highlight that, to the best of our knowledge, no other existing approach is able to solve the problem we are addressing in this work. There are significant challenges in solving the assessment problem we address here due to which the closest available baseline GLIB has several limitations, and we provided the baseline with additional details to work for the setting we address. GLIB requires input goals that should satisfy certain conditions (should be conjunctions of at least three predicates that are not true in the initial state), so we manually provided it with such goals (for each of the 3-5 input problems per domain) for it to perform comparably. We also used the same set of hyperparameters that the GLIB authors used in their evaluation. Even with these changes, we would need the fixes mentioned in lines 352-356 for it to apply to our setting. We will clarify this.
**a)** (and **Significance**) As the reviewer correctly points out, we mentioned in lines 18-19 that in terms of an SDMA, we envision that lay users should be able to determine what an SDMA “can do, what effects their commands would have, and under what conditions?”. In this context, our approach (QACE) describes the capabilities of the robot in terms of predicates that the user understands (This includes novice users as well as more advanced users like engineers.) Understanding the limits of the capabilities of the robot can help with the safe usage of the robot, and allow better utilization of the capabilities of the robot. Indirectly, this can reduce costs since the robot manufacturer need not consider all possible environments that the robot may possibly operate in. The use of our system can also be extended for formal verification of SDMAs.
**b)** We think that a learning time of 20-30 minutes might seem excessive only if the system provides no output to the user during the assessment process. Our system works by identifying the correct preconditions and effects, one $\langle l,p \rangle$ pair at a time. This could be streamed to the user, who can immediately benefit from this information (which is guaranteed to be correct). Furthermore, assessment is an infrequent process, and the system could be programmed to run it during long periods of inactivity, making sure that the user is always informed of any changes that occur (e.g., the coefficient of friction of the wheels changing from wear and tear). As seen in our empirical evaluation, our approach can provide the user with a complete model significantly faster than other SOTA approaches that learn models.
You are correct that a baseline performing exactly the same work as ours is missing, and hence we believe this is a valuable contribution in the direction of the personalized assessment of SDMA systems.
Note that our approach can be used by standard explanation generators as they need an agent’s model. Existing methods for explanation generation [1,2] require such models as input. Those models are hard to obtain (as we also illustrate in this paper) and this approach generates those models when they are not available to the users to start with.
**c)** As mentioned in lines 279-284, an important step in pruning the hypotheses is to get a state where the SDMA can execute a capability. Generating such states is easier if we use directed exploration, which can increase the probability of encountering such a state. Once we have such a state, we can use a process similar to the one described in our response to Q1 from reviewer *Jehr* above.
*Directed Exploration:* A partially learned model is a model where one or more capabilities have been learned (the correct preconditions have been identified for each capability and at least one effect is learned). We will clarify this. Once we have such a model, we can do a directed exploration of the state space for these capabilities by only executing a learned capability if the preconditions are satisfied. This helps in reducing the sample complexity since the simulator is only called when we know that the capability will execute successfully, thereby allowing us to explore different parts of the state space efficiently. Naturally, if a capability's preconditions are not learned, all of its groundings might need to be executed from the state.
In the worst case, to escape local minima where no hypotheses can be pruned, we would need to perform a randomized search for a state where a capability is executable by the SDMA. But practically, as we observed in our empirical evaluation, using directed exploration to generate a pool of states yields at least one grounded capability instance. This ensures that during query generation, the approach need not spend a long time searching for a state where a capability is executable.
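The directed-exploration loop described above can be sketched as follows. The `simulator.step` interface and the `learned` mapping from capabilities to precondition tests are hypothetical stand-ins for QACE's internals, not the actual API.

```python
import random

def directed_exploration(simulator, learned, state, steps=100, seed=0):
    """Collect a pool of states by only executing capabilities whose learned
    preconditions hold in the current state (directed exploration)."""
    rng = random.Random(seed)
    pool = [state]
    for _ in range(steps):
        # Only capabilities guaranteed to execute successfully are applied,
        # which keeps the simulator's sample complexity low.
        applicable = [c for c, pre in learned.items() if pre(state)]
        if not applicable:
            break  # worst case: fall back to randomized search from here
        state = simulator.step(state, rng.choice(applicable))
        pool.append(state)
    return pool
```

A toy run with integer states and one capability whose precondition is `state < 5` walks the state forward until the precondition fails, leaving a pool of six states.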
**d)** It is not possible for the two models to "either have different preconditions for $c'$ or different effects" because the location $l$ corresponds to capability $c$ in line 124. According to Alg. 1, the models $M_i$ and $M_j$ are created such that they differ in the precondition or effect of $c$ (depending on $l$). Since $c \neq c'$, the models already differ in the precondition (or effect) of $c$, and $M_i$ and $M_j$ are otherwise exactly the same. Hence $c'$ cannot have different preconditions or effects.
-----
*References:*
[1] Chakraborti et al. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. IJCAI 2017.
[2] Eifler et al. Plan-Space Explanation via Plan-Property Dependencies: Faster Algorithms & More Powerful Properties. IJCAI 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering the questions.
I confirm that I have read the rebuttal.
I believe QACE may contribute to advancing the field and help model capabilities of black-box AI systems.
However, since I'm not an expert in the area, and I'm not familiar with FOND and PPDDL models. and current efforts in this domain, I cannot fully appreciate the technical details of the paper nor its insights and as such do not feel comfortable raising my score to a weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank You for your review. We are glad that our response helped to answer your questions. | Summary: The paper proposes an algorithm for learning a probabilistic model of a black box agent's capabilities. The method assumes the existence of a vocabulary to describe the environment's state and the set of capabilities. The proposed method generates all possible hypotheses using three ways to add a predicate (as a condition, a negated condition, and not adding it). In the first instance, these hypotheses have no probabilities assigned. Using a sequence of distinguishing queries generated with a planner, they prune the version space (the possibly correct hypotheses). Lastly, the algorithm employs a frequentist estimation of the probabilities associated with the transitions.
The authors validate the algorithm empirically comparing it to a previous SOTA method.
Strengths: The strength of the paper is the careful exposition of an intuitive approach to the problem, along with the empirical evaluation claiming to surpass previous state-of-the-art algorithms.
Weaknesses: The weakness of the paper is the assumption that all possible hypotheses need to be generated. In real-life scenarios, this might be prohibitive.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The abstract mentions "evolving sequential decision making", but the methods assume the capabilities of the black box are fixed, and it doesn't discuss an evolving/adapting agent.
What is a FOND model? A short description would help.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and suggestions. We address your concerns below:
**Weakness)** As mentioned in lines 177-181, there are just three hypotheses corresponding to any $\langle l,p \rangle$ pair. Hence the number of hypotheses to be considered is a small constant (= 3) at any step in the algorithm.
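The three-hypotheses-per-pair bookkeeping can be sketched as follows. The `consistent_with_response` predicate and the sample response are hypothetical stand-ins for checking a hypothesis against the SDMA's answer to a distinguishing query.

```python
# For each <location, predicate> pair, the predicate can appear in the model
# as a positive literal, a negative literal, or not at all -- three hypotheses.
MODES = ("positive", "negative", "absent")

def hypotheses_for(location, predicate):
    """The three candidate placements considered for one <l, p> pair."""
    return [(location, predicate, mode) for mode in MODES]

def prune(candidates, consistent_with_response):
    """Keep only hypotheses consistent with the response to a distinguishing query."""
    return [h for h in candidates if consistent_with_response(h)]

cands = hypotheses_for("precondition(pick-item)", "empty-arm")
# Suppose the SDMA executed pick-item in a state where empty-arm held; that
# refutes the 'negative' placement (the response model here is hypothetical).
remaining = prune(cands, lambda h: h[2] != "negative")
```

Because the candidate set per pair never exceeds three, pruning via queries resolves each pair in at most a couple of steps, which is why the overall search stays tractable.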
----------
The answer to the individual questions are added below:
**Q1) Evolving SDM:** We wanted to focus that in case an SDM’s capabilities evolve; we would want to generate the descriptions of the SDM agent after the update. The model itself remains the same during the assessment process. We will clarify this point in the paper.
An example scenario of such evolving capabilities would be a robot that is initially deployed and assessed by QACE to yield a model $M_1$. After some time, the coefficient of friction of the wheels and gripper could change due to wear and tear, changing the model. The assessment process could be re-run (and in fact can be run using overnight batch updates since it is automatic, and handsfree requiring no human intervention) to capture this update to the model.
**Q2) FOND Model:** A FOND model is a fully observable non-deterministic model. Each capability has a precondition similar to the probabilistic model (lines 105-126), and an effect in the FOND model is also similar to the probabilistic model but without the associated probabilities. The capability shown in Figure 2, will be expressed as follows in a FOND model:
```
(:capability pick-item
:parameters (?location ?item)
:precondition (and (empty-arm)
(has-charge)
(robot-at ?location)
(at ?location ?item)
)
:effect (oneof
(and (not (empty-arm)) (not (at ?location ?item)) (holding ?item))
(and (not (has-charge)))
(and) ; no change
)
)
```
Here ```oneof``` in the effect represents that only one of the three effects will be applied on executing this capability.
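The final step that turns such a non-deterministic (FOND) model into a probabilistic one can be sketched as a frequentist maximum-likelihood count over observed outcomes. The effect labels and the trace below are hypothetical illustrations, not data from the paper.

```python
from collections import Counter

def estimate_effect_probs(observed_effects):
    """MLE for a oneof effect: the relative frequency of each observed outcome."""
    counts = Counter(observed_effects)
    total = sum(counts.values())
    return {effect: n / total for effect, n in counts.items()}

# Hypothetical outcomes of executing pick-item 20 times in the simulator:
trace = ["holding"] * 16 + ["no-charge"] * 2 + ["no-change"] * 2
probs = estimate_effect_probs(trace)
# probs -> {"holding": 0.8, "no-charge": 0.1, "no-change": 0.1}
```

Each branch of the `oneof` then carries its estimated probability, yielding a PPDDL-style probabilistic capability description.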
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: I thank the authors for their clarifications. Without being familiar with the field, I consider the current work solid enough to be published, but since I can't argue more for its impact in its field, I will keep my 'weak accept' suggestion.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review. We are happy that our response helped in answering your queries. | Summary: The paper tackles the problem of modeling the capabilities of a block-box sequential decision-making agent (SDMA) by querying the SDMA agent along the way. The presented method (QACE) uses an active learning approach to interact with the block-box SDMA and learn an interpretable probabilistic model of its capabilities. The paper also presents a theoretical analysis of QACE showing that it can learn a model that is both complete and sound w.r.t the ground-truth model of the SDMA. QACE works using version space partitioning using the queries to remove inconsistent hypotheses until it converges on one model. In the final step, the learned non-deterministic model is converted into probabilities over the capabilities of the system -- via MLE done on the collected data via responses to the queries. QACE is evaluated in four different settings -- Cafe Server robot, warehouse robot, driving agent, and first responder agent. Results show that QACE is able to recover the underlying model of each environment almost always in a reasonable time. The method also outperforms extensions of SOTA methods such as GLIB-L and GLIB-A.
Strengths: The premise of the problem that the paper tackles is an interesting one. It is a very practically relevant problem as the framework outputs interpretable capability names of the SDMA and defines how each capability name can be invoked. The paper is written clearly and the running example of the Cafe-server robot makes it easier to understand the presented algorithm. The experimental setup also shows that the policy simulation queries $\eta (= 5)$ to learn distinguishing queries, are within a reasonable range, to be applied in practical environments. The completeness and soundness guarantees provide a stronger basis for the adoption of the algorithm. Overall, the problem setup is quite interesting, and QACE provides a new direction to reason about the capabilities of block-box SDMA in an interpretable manner.
Weaknesses: - On the scalability of QACE: From the paper, it was not clear as to whether QACE can scale to systems that have a larger space of predicate and therefore larger probabilistic problem domain description language. For systems with larger predicates, would the parameter $\eta$ be reasonable enough to estimate the response to a query?
- Extended Theoretical analysis: While completeness and soundness guarantees are provided, it would also be interesting to see how many iterations of QACE are required for it to provide a reasonable estimate of the model. What is the convergence rate of QACE w.r.t the error in estimating the model?
- Clarity: A short description of the GLIB (the baseline) method would be good, as it is the only baseline that QACE is compared against.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Clarification: Is it correct to assume that QACE assumes that there is a unique model that defines a given SDMA? What happens if that assumption is not satisfied, or is it the case that SDMA will always have a unique model by design?
- What happens if the SDMA systems have some margin of error in providing responses to the queries? How would the error propagation be handled when generating distinguishing queries?
- In real systems, sometimes it is not feasible to ask several queries to a system. When the number of queries has a budget (say c number of queries/ or the cost of a query is c), how does it affect the accuracy of the learned model? Can further guarantees be provided on the learned model?
- See "Weaknesses" for additional clarifications.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed the limitations of the presented method and in its current form, there is no potential negative societal impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review. We address your questions and concerns below:
**Q1)** No, QACE doesn’t assume that there is a single model that defines the given SDMA’s functionality. QACE can return a functionally equivalent model when there are multiple correct representations. E.g., if p(x) and q(x) are equisatisfiable in a domain, functionally equivalent models can be created by replacing p's and q's with each other. In such cases, QACE will return a model that is functionally equivalent (modulo such substitutions).
**Q2)** In the current setup, the SDMA responds with the policy or the capability sequence that it would execute in response to the objective in a given query. However, the system allows for execution errors and stochastic environments: it can model probabilistic effects, and as a result, execution errors will be learned as additional effects of the respective capability in the query. Thank you for the question. We will clarify this in the final version.
**Q3)** (and **Weakness 2**): For our approach, QACE, the number of iterations of the algorithm is bounded by the number of interactions (steps) the system has with the agent. This is because each iteration has at least one interaction with the agent. The number of interactions (steps) is bounded by $\eta \times \alpha \times \text{num-queries}$. The plots for variational distance vs. the number of steps are available in the supplementary material (Fig. 3). Since this number is very small for QACE when compared to the baseline GLIB, we have also included a zoomed-in version of the plots and a plot of variational distance vs. the number of unique queries in the additional supplementary material (Sec. 5) submitted with this rebuttal response. We will merge this with the main supplementary in the final version.
Theoretically, for a fixed query budget, we can get an upper and lower bound on the number of $\langle l,p \rangle$ tuples that can be learned correctly. This is because if we stop the learning process in between, the current model M* will be correct in terms of $\langle l,p \rangle$ tuples that were already processed by QACE up to that point.
**Weakness 1)** Scalability Issue: We are providing a table below with the size of domains in terms of the number of predicates and capabilities. The number of queries is linear in terms of the number of predicates and capabilities (for loop in line 3 of Alg. 1). Note that the for loop in line 5 only contributes a constant factor to the running time, as only three hypotheses are possible (lines 177-181). Thank you for the question. We will clarify this in the final version.
| SDMA | \|P\| | No. of Capabilities |
| --- | --- | --- |
| Warehouse Robot | 8 | 4 |
| Driver Agent | 4 | 2 |
| First Responder Robot | 13 | 10 |
| Elevator Control Agent | 12 | 10 |
| Cafe Server Robot | 5 | 4 |
Additionally, our approach is highly scalable and the parameter $\eta$ is used only to learn the correct probabilities in the effects of capabilities.
**Weakness 3)** Thanks a lot for the suggestion. We did give a very high-level overview in lines 352-358. But we understand this might not be sufficient for audiences not familiar with GLIB. In addition to the text, we will add a short summary of GLIB in the revised version using extra pages.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses and clarifications. After going over the supplementary plots and responses, my queries have been sufficiently addressed (in particular, on the bounds of the learned model from Fig. 1 in the supplementary material). The work tackles an interesting problem and presents some takeaways that may be insightful for the community. I have raised my rating to a "Borderline Accept".
---
Reply to Comment 1.1.1:
Comment: Thank you for your review. We are glad that the updated plots in the new supplementary material and our response helped in addressing your queries. | Summary: This paper addresses the problem of creating a user-interpretable probabilistic model of the capabilities of a sequential decision-making (SDM) system through only interacting with the system as a black box (rather than inspecting its internal structure, e.g., reasoning dynamics). In particular, PPDDL is proposed to model the SDM system. An algorithm is proposed that (1) generates queries that seek to determine the location of predicates as either preconditions or effects for capabilities in PPDDL descriptions, and (2) collects data from repeated interactions with the system (likely through simulation, but also possibly through real-world runs) to both determine the validity of PPDDL descriptions and their component probabilities. Theoretical results seek to establish that the resulting model is both "sound" and "complete". The algorithm is evaluated on five simulation environments, demonstrating convergence to a model with low variational distance from the true SDM.
Strengths: S1) The problem considered has many real-world applications, is indeed understudied, and is relevant for the planning and RL communities at NeurIPS.
S2) The paper is relatively well-written and easy to follow.
S3) The use of maximal-likelihood estimation for determining probabilities in the PPDDL descriptions seems appropriate as proposed and in-line with other work on learning models of stochastic systems (e.g., model-based reinforcement learning).
S4) I appreciate that the authors considered both theoretical and empirical analysis of their approach.
S5) The empirical results considered a range of benchmarks to aid in evaluating the generalizability of the approach and its advantage over a baseline (GLIB).
Weaknesses: The primary weaknesses of this paper include:
W1) The algorithm proposed appears to perform a linear search through what is ultimately a combinatorial search space of potential PPDDL descriptions. The for loop on line 3 of Algorithm 1 loops over each condition location and predicate combination independently to determine whether that predicate appears as a precondition or effect for a given condition in some PPDDL description relevant to the system.
However, it isn't clear how the algorithm can discover PPDDL descriptions when two or more predicates appear as a precondition or effect only in AND combination with other predicates. In that case, looking at an individual <l, p> pair will not provide enough evidence to determine that the predicate p should be in location l for the relevant condition.
For example, say that condition C has a single precondition (and P1 P2) and a single effect (and P3 P4). Looking at <precondition P1> or <precondition P2> alone will not generate the necessary data to determine that either is part of a PPDDL description, since they are only relevant together and have no measurable outcomes alone. Similarly for <effect P3> and <effect P4>. While there are only 2|C| locations where a predicate can appear, its appearance occurs within 2^|P| (the power set of P) possible combinations of predicates, and it is completely unclear how your algorithm would find all such combinations for even a single capability, which requires searching an exponential space of predicate combinations, especially using only \eta simulated trajectories of the system. And without all such combinations, it is unclear how your ultimate model is either sound or complete.
I can imagine there might be some submodular subset of SDM problems where the algorithm converges to the correct set of PPDDLs. But even in that subset of problems, a certain order of search in the for loop on line 3 (since M* is incrementally constructed) seems highly important for the soundness and correctness of the final model M constructed by the algorithm.
Altogether, the combinatorial nature of the PPDDL descriptions also implies scalability concerns as the number of capabilities and especially predicates increases, but a large number of unique predicates (and hence a very large number of combinations) are likely to be needed for realistic systems of important real-world problems.
W2) The algorithm presented in Algorithm 1 is not an anytime algorithm, so I'm not sure what it means to increase the learning time. It has two for loops whose time complexity depends on the number of capabilities and predicates, and the number of simulation traces \eta was fixed to 5 in the experimental setup.
W3) The uncertainty present in the benchmarks seems rather small, so they do not seem to highlight how well the approach handles non-deterministic environments (which is considered one of the main strengths when compared to the prior work in Section 7). Requiring only \eta = 5 simulation traces per query for good convergence implies that (1) it was rather simple to experience traces that demonstrate the existence (or absence) of a predicate as a precondition or effect (i.e., the environment is not very stochastic), and (2) the probabilities in the domain must all be close to multiples of 20% (since that's the best level of precision you can achieve with 5 traces in your data-driven Maximal Likelihood Estimation). In most Monte Carlo sampling of complex stochastic environments, many traces are required (and are only guaranteed to converge as \eta approaches infinity).
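To make the precision point concrete, a tiny sketch (illustrative only; the values follow directly from using 5 traces, not from the paper):

```python
# With eta = 5 traces per query, a data-driven MLE of an effect probability
# is a success count divided by 5, so it can only land on multiples of 0.2.
eta = 5
possible_mle_estimates = [k / eta for k in range(eta + 1)]
print(possible_mle_estimates)  # → [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

Any true effect probability not close to one of these six values cannot be estimated accurately at this sample size.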
W4) I think this work is a good starting point to achieve its goal of explaining SDM systems to users. It wasn't clear to me how well the approach factors in the user's environment, which will be necessary if end users (especially non-AI specialists) are to interact with and trust the system. For example, I might receive a set of descriptions of the capabilities of a retail robot vacuum with this approach, but when I take it home to my environment that is different from the one where the PPDDL descriptions were created, the robot might behave very differently. Maybe the probabilities change as it has a more difficult time navigating around my furniture or along the type of rugs on my floor. Or maybe even different predicates would need to be added since there are confounding factors (e.g., the presence of different types of pets).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Q1) How does your search handle the exponential number of possible combinations of predicates that could exist as preconditions or effects?
Q2) In your experiments, what do you mean by learning time, and how did you test your solution with different amounts of learning time? My naive assumption would be that it took 4 hours to run the entire algorithm and you measured the quality of M* along the way, but that would imply that only looking at a few of the <l, p> pairs (the first few in the for loop on line 3) gave you a really accurate model, which doesn't make sense because they tell you little about the other <l, p> pairs not yet considered.
Q3) How do you interpret the low variational distance with only \eta = 5?
### Post-Rebuttal ###
I thank the authors' for their rebuttal and the ensuing conversation. They helped strengthen my understanding of the proposed method. I think that adding more detail to the body of the paper (instead of the supplement) would greatly strengthen the impact of the work since it appears there are a lot of necessary details that were not originally presented that affect the efficiency and effectiveness of the algorithm.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There wasn't a lot of discussion of the limitations of the approach, but addressing many of the weaknesses (especially the combinatorial nature of the problem) would aid the reader in better understanding when the approach could be applicable vs. when improvements to the technique would be needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback and questions. We address your questions and other concerns below:
**Q1)** The reviewer correctly points out that the search space of possible preconditions and effects is exponential. We also mentioned this in lines 40-42. Verma et al. [1] showed that preconditions (and effects) that are a conjunction of predicates can be learned in a linear number of queries (in terms of the number of predicates and capabilities) using query synthesis over abstract models (for deterministic settings). Our approach uses the same methodology and performs query synthesis over abstract models in non-deterministic settings.
This approach leads to fewer queries because the reasoning about the correctness of a precondition or effect is not done using models that are at the same level of abstraction as the ground truth model but instead using a high-level abstract model that has fewer predicates in the precondition and/or effect of some/all capability(ies).
E.g., consider a capability with a precondition $p1 \land p2$ as suggested by the reviewer. The automated query generation process will involve executing the capability successfully in some state $s$ by the policy. The SDMA can only execute the capability in $s$ if $p1 \land p2$ is true in $s$. As mentioned in lines 279-284, if $s$ doesn’t fulfill this criterion (i.e., the SDMA fails to execute the policy successfully) a new query is generated from a new initial state $s’$. Hence, this property of executing the capability in a state having $p1 \land p2$ is ensured. Now, when reasoning about p1, the policy can ask the agent to execute that capability in the state $s \setminus p1$ and if the SDMA fails to execute it then it means $p1$ is part of the precondition. Similarly, this can be done for p2 independently.
In the worst case, the search for a state $s$ where a query policy is executable will be exponential, but as the evaluations show, we can learn the correct model much faster. We also mention a way to overcome this in lines 282-284. Please note that even for methods like reinforcement learning, the worst-case upper bound is exponential in terms of the state space. We will include this discussion in the final version using the additional page permitted by NeurIPS.
**1.1:** About the method working only for submodular PPDDLs: As shown in the results, this method **does** work for PPDDLs that have conjunctive preconditions. E.g., in Fig.2 in the paper, we have the precondition (empty-arm) $\land$ (has-charge) $\land$ (robot-at ?location) $\land$ (at ?location ?item). This capability is from the cafe server robot.
The empirical evaluation showed that QACE (our approach) can learn such models much faster than the closest SOTA approach GLIB as shown in Fig. 5.
**Q2)** Your intuition is correct. For an experiment run, we run QACE as well as the baselines from scratch. For the plots, we took a snapshot of the learned models every 60 seconds and computed the variational distance using a fixed test dataset. As you can notice in the graph, the variational distance is very high initially, and it drops until the learning process of QACE ends (marked by a blue x on the plots). We do not need to run QACE beyond this point, and this time is short for all the domains. On the other hand, GLIB doesn't have a clear ending criterion. Hence, we let it run for 4 hours and see that even with the extra time (and hence extra samples), it cannot learn a better model.
About getting an accurate model with few $\langle l,p \rangle$ pairs: This is not true. Since the plots are shown for a period of 4 hours, it may seem that QACE learns the model without using all $\langle l,p \rangle$ pairs. We do consider all $\langle l,p \rangle$ pairs and learn the final model in an efficient manner. Fig. 1 in the extended supplementary material (uploaded with the rebuttal response) clarifies this point using the zoomed-in portions for the duration when QACE is running. This figure also shows that the model learned by QACE gets better with time as it processes more $\langle l,p \rangle$ pairs.
This also addresses a similar concern raised in W2. Please note that a larger $\eta$ might be needed for a more complex domain to learn the correct probabilities.
**Q3)** The hyperparameter $\eta$ is 5, but this does not mean that we execute each capability just 5 times. As mentioned in the paper, consider $\mathcal{P}$ to be the set of predicates and $|\mathcal{P}|$ the number of predicates. Now a capability $c$ can appear in policies for $\langle l, p \rangle$ pairs such that location $l$ corresponds to a precondition or effect of $c$. So effectively, $c$ can appear in at least $2 \times |\mathcal{P}|$ queries, and we will have at least $2 \times |\mathcal{P}| \times \eta$ samples for each capability.
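As a numeric sanity check of this counting argument (a sketch; $|\mathcal{P}| = 5$ is taken from the Cafe Server Robot domain, and $\eta = 5$ from the experimental setup):

```python
# Lower bound on observed executions per capability, per the counting
# argument above. Values are illustrative (Cafe Server Robot domain).
num_predicates = 5           # |P|
eta = 5                      # simulated trajectories per query
locations = 2                # a predicate can sit in a precondition or effect

queries_with_c = locations * num_predicates  # >= 10 queries involve c
samples_for_c = queries_with_c * eta         # >= 50 samples, not just 5
print(samples_for_c)  # → 50
```

So even with a small $\eta$, each capability is exercised an order of magnitude more often than $\eta$ alone suggests.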
**W4)** This is precisely the motivation for our work, and is addressed directly by the presented method. In your scenario, the user would be able to use our system to discover the new model of the agent, with probabilities in this new environment. We agree that predicate discovery is also an open problem in this area and we plan to address it in future work by building upon the presented methods.
----------------------
*References:*
[1] Verma, P., Marpally, S. R., & Srivastava, S. Asking the Right Questions: Learning Interpretable Action Models Through Query Answering. AAAI 2021.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal by Authors
Comment: I thank the authors for their responses to my questions and overall review! I especially better understand your answer to my Q2, and relatedly I noticed that I missed the X in Figure 5 that indicates when QACE stopped running.
I'm still confused about Q1 -- where in your Algorithm 1 will you consider p1 AND p2 together? Line 3 is a loop over all <l, p> pairs, and p is an element of the set of possible predicates P, so wouldn't p be only a singleton and not a conjunction of elements of P?
And if discovering p1 AND p2 is dependent on testing in a particular state s where p1 AND p2 are required, how do you ensure that QACE receives that state s as an input? Do you run QACE on every possible state?
---
Reply to Comment 1.1.1:
Comment: Thank you for going through our response.
The method does work iteratively, but the iterations are not independent of each other. Consider the case where QACE processes $\langle l,p \rangle$ = $\langle$ precondition of $c, p_1 \rangle$ in line 3. This iteration of the loop will involve creating a query with an initial state where $p_1 \land p_2$ is true (more on how we get this state later). Using this query, QACE will set $M^*$ to have $p_1$ as a precondition of capability $c$ in line 8. Next, when QACE considers, $\langle l,p \rangle$ = $\langle$ precondition of $c, p_2 \rangle$, the three hypotheses generated in line 4 using $M^*$ will have the precondition of $c$ as $p_1 \land p_2$ (in $h_T$), $p_1 \land \neg p_2$ (in $h_F$), and $p_1$ (in $h_I$). So essentially, QACE builds upon the already learned partial models in previous iterations.
**About getting the states where a capability is executable:** This refers to getting the state where $p_1 \land p_2$ is true in the example above. Your intuition is correct that, in the worst case, QACE will have to check all possible states that can be generated using the predicates. In practice, though, we use directed exploration (also mentioned in response to reviewer *k56H*) to avoid such cases. We start with the input state $s$ (we only use one state as input) and use the partially learned model $M^*$ to generate new states where $c$ might be executable. Empirically, this process is very fast, as evident from the results. If this approach does not work, it defaults to a randomized exploration to generate a state where $c$ is executable, which can lead to exploring all states in the worst case. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed reviews and comments. We answer the questions posed by the reviewers separately; please find them in the responses below the reviews. We are also adding a supplementary page with two plots: one showing the zoomed-in version of the plot of variational distance vs. learning time for QACE (our approach) and GLIB (baseline), and the other showing variational distance vs. the number of queries for QACE.
Pdf: /pdf/74777addbc80ffbe6f42fb7bf2fceff60267f317.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a method for modeling the capabilities of black-box artificial intelligence systems, which can plan, act, and execute in a stochastic setting. Specifically, the proposed method introduces an active learning approach to interact with the black-box SDM system and learn a probabilistic model that describes its functionality. The paper presents theoretical analysis that guarantees the convergence of the learning process. Empirical evaluations on different intelligent agents and simulated scenarios demonstrate that the proposed method exhibits generalizability with limited data and effectively describes the capabilities of the agents.
Strengths: 1. The paper proposes a method for modeling the capabilities of black-box artificial intelligence systems. Overall, this paper is well-motivated and provides detailed explanations.
2. This paper describes how the active learning method effectively interacts with the black-box SDM system and introduces a probabilistic model that explains its functionality.
Weaknesses: 1. The empirical evaluation results of the paper demonstrate that the model can adapt well to training tasks with few samples. However, how does it perform in terms of generalization evaluation for new tasks?
2. Another major concern I have is the generality of the proposed method, especially when people want to apply it to more complex manipulation tasks. While it has been validated on several examples, I am unsure how these ideas can be extended. How can it be applied to more practical environments with skill-specific parameters, such as grasping angles and placement targets?
3. In practice, how is the intelligent system trained with data? How is this data collected on physical robots, and what are the challenges involved?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Regarding this paper, please refer to my "Weaknesses" for questions and comments. They mainly concern the limitations of the method, evaluation, and practical data.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper briefly mentions its limitations, but it would be beneficial to include a discussion on potential social implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for questions and support. We address your comments on the weakness of the approach below:
**Weakness 1)** The test set is not the same as the training set. As mentioned in lines 332-334, we used a single problem as input. Additionally, QACE (our approach) generates queries and based on their responses we learn the correct model. The methods were tested on environments that were much larger in terms of objects as compared to the input problem. This shows the generalization of our approach for tasks that follow the same dynamics.
**Weakness 2)** Our approach is highly general for SDM agents. Our experiments on the SDMA with the cafe server robot involved complex grasping poses and angles. The predicates used to express the domain were at a high level of abstraction, which abstracts away this low-level information and hence explains the high-level dynamics of the system. For vocabularies that can express difficult concepts such as different types of grasps, this approach will accommodate those predicates and learn a model in terms of them. This feature makes the models personalized for each kind of user.
**Weakness 3)** Our intelligent data-gathering process uses active-learning-based query generation, allowing the collection of training data in an automatic, hands-free fashion without requiring any human intervention. In our paper, this process was accomplished on benchmark SDM agents connected with simulators.
In the real world, on physical robots, our system could connect to the robot via an interface or be deployed directly. It can issue commands (execute capability c) and observe the responses to those commands to collect training data. The key challenges here pertain to the safe operation of the robot. Since the models of its capabilities are not known, running these commands directly on the robot could potentially lead it to perform an unsafe operation. This could be easily circumvented if the physical robot had rudimentary safety protocols built in. This is part of the future extension of the work that we mention in lines 422-424.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal and their work.
I read the rebuttal carefully. I've raised my rating to weak accept because some of my concerns have been addressed in the rebuttal, and the work offers some interesting ideas that should be shared with the community.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review. We are happy to see that we have addressed your concerns. It would be great if you could update the score to reflect your comments. Thank you. | null | null | null | null | null | null |
Does Graph Distillation See Like Vision Dataset Counterpart? | Accept (poster) | Summary: This paper mainly focuses on the Laplacian Energy Distribution (LED) shift problem of graph dataset condensation.
Strengths: The pipeline figure is clear and straightforward.
The studied problem, how to condense a graph dataset, is relatively important.
Weaknesses: Compared to existing studies that have widely investigated gradient matching, it seems that the major contribution of this paper is considering the LED shift. Why LED, and how does LED help? The authors need to provide more analysis and explanation of their motivation.
In the Introduction, the authors mention "We empirically find a positive correlation between LED shift and the performance in cross-architecture settings". This can only be considered a finding. The motivation for introducing LED is ambiguous, especially given that correlation does not imply causation.
The authors only compare their method with GCond and consider it the SOTA. However, many works have been proposed since GCond. The authors should compare with them as well for a comprehensive evaluation of their method, for example [1, 2, 3, 4, 5].
[1] Jin, Wei, et al. "Condensing graphs via one-step gradient matching." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.
[2] Yu, Ruonan, Songhua Liu, and Xinchao Wang. "Dataset distillation: A comprehensive review." arXiv preprint arXiv:2301.07014 (2023).
[3] Liu, Chuang, et al. "Comprehensive graph gradual pruning for sparse training in graph neural networks." IEEE Transactions on Neural Networks and Learning Systems (2023).
[4] Zheng, Xin, et al. "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data." arXiv preprint arXiv:2306.02664 (2023).
[5] Yang, Shuo, et al. "Dataset pruning: Reducing training data by examining generalization influence." arXiv preprint arXiv:2205.09329 (2022).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the instructive questions. We make responses to the reviewer’s comments as follows.
## Q1: More analyses and explanations of motivation?
A1: Thanks for the question, we show the storyline of our paper as follows.
- The core question in our paper is: "Does graph distillation see like vision dataset counterpart?" Previous works follow the vision dataset distillation methods, which may lead to the following limitations: 1) the original graph structure information is not well preserved after condensing (see Fig. 1(b)); 2) in vision dataset distillation, gradient matching may entangle the synthetic dataset and architecture [86], which may be compounded on graph data (lines 43-54).
- Then, how do we broadcast the original structure information into the condensed graph? In graph spectral theory [63, 1, 10, 62], the LED shift serves as a metric for measuring the difference between two graph structures. However, due to the significant size difference between the original and condensed graphs, directly calculating the LED shift is impractical. Inspired by [63, 1], we analyze the LED shift between two graphs in the spectral view. We introduce the LED Shift Coefficient ($SC$) as a measure of the shift and empirically validate its consistency with performance. As optimizing $SC$ incurs substantial costs (see our response to Q4 of reviewer 5ivm), we further employ the Optimal Transport distance to approximate $SC$ during optimization. A detailed analysis is provided in lines 172 to 181.
- Our SGDD approach achieves state-of-the-art results on 9 datasets. For example, in the YelpChi, we maintain 98.6% test accuracy while reducing the graph scale by 1,000 times.
- Compared to previous methods, $SC$ is significantly reduced by SGDD. For example, 29.4% reduction on Reddit, 50.0% on YelpChi, 60.8% on Reddit, and 62.5% on Amazon (see Fig. 1(b, e), Fig. 5(b, e), and the response to Q4 of reviewer 5ivm).
In summary:
1. The capability of the LED to convert the graph structure into a frequency domain distribution is essential for calculating the structure distance, particularly in cases where there is a substantial size difference.
2. Following the discussion on [63, 1, 10, 62], the LED is closely associated with the generalization performance of GNNs, providing valuable insights into the role of graph structure in the condensation process.
**The correlation does not mean causality:** Thanks for the question. The correlation inspires us to design the SGDD method, which effectively broadcasts the structural information from the original graph dataset to its condensed version. Our SGDD achieves SOTA in most cases, including new SOTA results on YelpChi and Amazon with improvements of 9.6% and 7.7%, respectively.
Empirical results support the consistency between LED shift and generalization performance (see Fig. 1(b, e), Fig. 5(b, e), and response to Q4 of reviewer 5ivm). Compared to previous methods, our SGDD significantly improves performance while reducing $SC$.
---
## Q2: Considering compare with more baselines?
A2: Thanks for the advice. We conduct experiments with 5 additional baselines.
We first clarify that (1) **the public release date (5 Jun) of SFGC [4] is after the NeurIPS submission deadline (17 May)**; nevertheless, we compare with it in the revision. (2) The literature [2] is a survey of the computer vision area, so we choose the representative method MTT [A] for comparison.
- Note and details:
- We calculate the average performance of each method and use the $\Delta$ to indicate the relative difference of the SGDD to the other method.
- Since CGP [3] has no public code, we reproduce it in the evaluation.
- Dataset Pruning [5] requires a linear programming solver; we use CPLEX [B] as the backend optimizer.
We highlight the best-performing entries in **bold** and indicate the $\underline{\text{runner-ups}}$ with underlined values.
| | Citeseer($r$=1.8%) | Cora($r$=2.6%) | Ogbn-arxiv($r$=0.25%) | Flickr($r$=0.5%) | Reddit($r$=0.1%) | YelpChi($r$=0.1%) | Amazon($r$=0.2%) | Avg. / $\Delta$ (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Whole Dataset | 71.7±0.1 | 81.2±0.2 | 71.4±0.1 | 47.2±0.1 | 93.9±0.0 | 61.1±1.8 | 89.5±0.9 | 73.7 / - |
| MTT [2, A] | 68.4±0.8 | 78.3±0.8 | 59.9±0.6 | 44.8±0.8 | 86.7±0.1 | 44.1±0.8 | 76.7±1.1 | 65.5 / **-5.8** |
| CGP [3] | 67.4±1.3 | 77.6±0.8 | 61.1±0.7 | 45.4±0.4 | 86.6±1.8 | 49.6±0.2 | 77.4±1.8 | 66.4 / **-4.9** |
| Dataset Pruning [5] | 66.8±1.8 | 74.6±1.9 | More than 3 days | More than 3 days | More than 3 days | 46.8±1.1 | 68.6±0.4 | 64.2 / - |
| DosCond [1] | 69.8±0.3 | 79.4±0.7 | 58.8±1.1 | 46.3±0.4 | 88.6±0.5 | 47.6±0.3 | 77.4±0.8 | 66.8 / **-4.5** |
| SFGC [4] | **72.4±0.4** | **81.7±0.5** | $\underline{66.1±0.4}$ | $\underline{47.0±0.1}$ | $\underline{90.0±0.3}$ | $\underline{44.7±0.1}$ | $\underline{77.5±0.7}$ | 68.4 / **-2.9** |
| SGDD | $\underline{70.3±0.8}$ | $\underline{80.6±0.8}$ | **67.2±2.8** | **47.1±0.3** | **91.8±1.9** | **58.1±2.3** | **84.8±1.7** | 71.4 |
Conclusions:
- **Compared to Dataset Condensation methods [2, 1, 4], our method demonstrates an average improvement across all datasets. Notably, SGDD exhibits higher improvements on large datasets, highlighting the importance of broadcasting the structural information in condensing larger datasets.**
- **Compared to dataset pruning methods [3, 5], the linear programming-based approach [5] is prohibitively slow (exceeding 3 days on the larger datasets), and the mask-based method [3] may hinder condensed graph connectivity, resulting in poor performance.**
Due to limited time, we primarily focus on the node classification and anomaly detection tasks; we would be happy to show more experiments if the reviewer is interested.
---
[A] George Cazenavett. et al. "Dataset Distillation by Matching Training Trajectories." CVPR 2022.
[B] Cplex, I. I. User’s Manual for CPLEX. *International Business Machines Corporation*, 2009.
---
Rebuttal Comment 1.1:
Title: Further Discussions with Reviewer FmUF
Comment: Dear reviewer FmUF:
Thanks for taking the time to review our paper. We detailed our motivation in the response and provided a more comprehensive study with 5 additional works. We hope our rebuttal has addressed your concerns.
As the discussion period is nearing its end, please feel free to let us know if you have any other concerns. Thanks!
---
Reply to Comment 1.1.1:
Title: Looking forward to your reply!
Comment: Dear Reviewer FmUF,
As we draw closer to the rebuttal deadline, I would like to inquire if you have any additional questions or concerns about our work. We greatly value your feedback. Thank you!
Best,
Authors from submission 1279 | Summary: This paper proposes a novel method called SGDD for condensing large-scale graph datasets while preserving the original structure information. The proposed method uses a graphon approximation method to broadcast the original structure as supervision for generating the condensed graph structure and optimizes it using an optimal transport method. The proposed SGDD achieves state-of-the-art results on various datasets and tasks.
Strengths: (1) This paper uncovers the issue of Laplacian energy distribution shift during the condensation of graph datasets and shows that previous graph condensation methods that overlook the original structure information can lead to poor performance in cross-architecture generalization and specific tasks. This paper introduces a metric SC to quantify the Laplacian energy distribution issue.
(2) A new condensing paradigm in graph condensation. This paper leverages graphon theory to guarantee the structural consistency between the original and generated graphs, and is the first to introduce a generative learning fashion into graph condensation. Consequently, the generated structure may capture the native properties of the original structure better than current methods, which benefits the downstream tasks.
(3) Extensive experimental results demonstrate the effectiveness of the proposed method. This work includes extensive experiments on three node-level tasks as well as multiple ablation studies. All of the experiments demonstrate superior results. The most appealing result is that this method performs well in a cross-architecture setting, significantly improving the effectiveness of graph condensation.
(4) Potential impact. As data and models grow larger, graph condensation may find more applications and research significance. Since the applications and challenges related to graph condensation remain underexplored, this paper may set the stage for future exploration of this problem and inspire the community to investigate further, potentially benefiting areas like pruning, LLMs with graphs, and NAS.
Weaknesses: (1) The motivation for using JS divergence in the calculation of SC is unclear. There are several ways of calculating the distance between two distributions.
(2) In Figure 2, the authors use BWGNN without specifying the difference between BWGNN and other backbones.
(3) There is another related method called DosCond[1] which is mentioned in the introduction but not involved in the baselines. The reason why not choosing this method for comparison should be specified.
[1] Jin, Wei, et al. “Condensing Graphs via One-Step Gradient Matching.” Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Please detail the motivation in Weakness 1 and Weakness 2.
(2) Consider adding the baseline mentioned in Weakness 3, or specify the reason why you do not compare with this method.
(3) There is a high variance in the results of SGDD in Table 1. Would changing the backbone potentially improve the overall performance as shown in Table 1?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: (1) In the real world, graphs may contain multiple types of edges, resulting in heterogeneity. However, the current learning methods may fail to capture this heterogeneity, leading to information loss.
(2) Lack of discussion on different backbones.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed comments and insightful questions. We make responses to the reviewer’s comments as follows.
## Q1: The motivation for using JS divergence in the calculation of $SC$ is unclear?
A1: Thanks for the comment. We list three commonly used distances in the table below and compare their characteristics.
| Distance Type | Characteristics |
| - | - |
| Wasserstein distance | Often used when the two distributions are non-overlapping, since it remains finite and informative in that case |
| Kullback-Leibler divergence (KL) | Asymmetric ($KL(P \Vert Q) \neq KL(Q \Vert P)$), so it cannot serve as a symmetric comparison metric |
| Jensen-Shannon divergence (JS) | Symmetric and bounded; offers an intuitive view of how similar two distributions are in shape |
Here we list the results on two graphs $G_a$ and $G_b$ condensed from four datasets ($G_o$). We report the different distances and the corresponding average accuracy.
Note: we use $\uparrow$, $\downarrow$, and - to denote an increase, a decrease, and no change, respectively.
| | Cora ($r$=2.6%) | | Citeseer ($r$=1.8%) | | Flickr ($r$=0.5%) | | Reddit ($r$=0.05%) | |
| - | - | - | - | - | - | - | - | - |
| | $G_a$ vs $G_o$ | $G_b$ vs $G_o$ | $G_a$ vs $G_o$ | $G_b$ vs $G_o$ | $G_a$ vs $G_o$ | $G_b$ vs $G_o$ | $G_a$ vs $G_o$ | $G_b$ vs $G_o$ |
| KL | 0.15 | 0.15 - | 0.16 | 0.32$\uparrow$ | 0.18 | 0.19$\uparrow$ | 0.21 | 0.22$\uparrow$ |
| Wasserstein distance | 0.24 | 0.21 $\downarrow$ | 0.14 | 0.18$\uparrow$ | 0.26 | 0.17$\downarrow$ | 0.32 | 0.18$\downarrow$ |
| JS | 0.11 | 0.24 $\uparrow$ | 0.27 | 0.36$\uparrow$ | 0.16 | 0.19$\uparrow$ | 0.33 | 0.42$\uparrow$ |
| Avg. Acc. (%) | 74.1 | 68.2 $\downarrow$ | 68.2 | 64.2$\downarrow$ | 45.6 | 33.1$\downarrow$ | 90.6 | 88.7$\downarrow$ |
Conclusion(s):
- **While KL divergence and Wasserstein distance could theoretically be used, we observe that their consistency was inferior to that of JS divergence in this specific case. As a result, we chose to utilize JS divergence in our experiments.**
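To illustrate the tradeoffs discussed above, here is a hedged toy sketch (the helper name `kl_divergence` is ours) contrasting the three distances on two small discrete histograms, using SciPy for JS and Wasserstein:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import jensenshannon


def kl_divergence(p, q, eps=1e-12):
    """Plain KL divergence; note kl(p, q) != kl(q, p) in general."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))


p = np.array([0.8, 0.1, 0.1])
q = np.array([0.4, 0.3, 0.3])

# KL is asymmetric -- one reason the rebuttal rules it out as a metric.
print(kl_divergence(p, q), kl_divergence(q, p))

# JS is symmetric and bounded; SciPy returns its square root (a metric).
print(jensenshannon(p, q), jensenshannon(q, p))

# 1-D Wasserstein over the support positions [0, 1, 2].
support = np.arange(3)
print(wasserstein_distance(support, support, u_weights=p, v_weights=q))
```

Running this shows the two KL directions disagree while the two JS directions coincide, which is why a symmetric, bounded divergence is the more natural choice for a shift coefficient.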
## Q2: The difference between BWGNN and other backbones?
In Figure 2, the author uses BWGNN without specifying the difference between BWGNN and other backbones.
A2: Thanks for the question.
**Motivation:** To examine the impact of each backbone's frequency response on the Laplacian energy distribution (LED) of the condensed graph, it is crucial to eliminate the influence of other differences among the backbones themselves. We therefore select the frequency-adaptive BWGNN [63].
**Brief introduction to BWGNN:** BWGNN is built on the Beta wavelet, whose kernel follows the probability density function of the Beta distribution:
$$
\beta_{p, q}(w)= \begin{cases}\frac{1}{B(p+1, q+1)} w^p(1-w)^q, & \text { if } w \in[0,1] \\ 0, & \text { otherwise }\end{cases}
$$
where $p, q \in \mathbb{R}^{+}$ and $B(p+1, q+1) = p!q!/(p+q+1)!$ is a normalizing constant. BWGNN adopts $B^*_{(p, q)}(w) = \frac{1}{2}B_{p,q}(\frac{w}{2})$ to cover the complete spectral range of $L$, so different parameter settings change the frequency response of BWGNN (see Section 3.2, "Beta Wavelet on Graph", in [63]).
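For concreteness, a minimal sketch of the Beta wavelet kernel above (our own code, restricted to integer $p, q$ so that $B(p+1, q+1)$ reduces to factorials; real-valued $p, q$ would use the Gamma function instead):

```python
from math import factorial


def beta_kernel(w, p, q):
    """Beta-distribution density beta_{p,q}(w) on [0, 1] (zero elsewhere),
    with B(p+1, q+1) = p! q! / (p+q+1)! for integer p, q."""
    if not 0.0 <= w <= 1.0:
        return 0.0
    b = factorial(p) * factorial(q) / factorial(p + q + 1)
    return (w ** p) * ((1.0 - w) ** q) / b


def rescaled_beta_kernel(w, p, q):
    """B*_{p,q}(w) = (1/2) beta_{p,q}(w / 2): stretches the support to
    [0, 2], the full spectral range of the normalized Laplacian L."""
    return 0.5 * beta_kernel(w / 2.0, p, q)
```

Varying $(p, q)$ moves the pass-band of the kernel along the spectrum (the density peaks at $w = p/(p+q)$), which is the controllability the rebuttal relies on.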
**Difference to other backbones:** Following [1], we list the difference between BWGNN and the other backbones as follows.
| Backbones | Low-pass support | High-pass support | Controllable spectral range |
| - | - | - | - |
| SGC | $\checkmark$ | $\times$ | $\times$ |
| GCN | $\checkmark$ | $\times$ | $\times$ |
| ChebNet | $\checkmark$ | $\checkmark$ | $\times$ |
| GAT | $\checkmark$ | $\checkmark$ | $\times$ |
| BWGNN | $\checkmark$ | $\checkmark$ | $\checkmark$ |
Conclusion(s):
- **Compared to other backbones, BWGNN not only supports all-pass filtering but also offers spectral controllability, which makes it convenient to apply across all 9 datasets.**
## Q3: The experiment results about DosCond?
A3: Thanks for the suggestion. We conduct experiments on the node classification and anomaly detection tasks with DosCond [29] and show the results in the following table.
- **Note:** We use $\Delta$ to indicate the performance improvement from DosCond to SGDD.
| | Citeseer($r$=1.8%) | Cora($r$=2.6%) | Ogbn-arxiv($r$=0.25%) | Flickr($r$=0.5%) | Reddit($r$=0.1%) | YelpChi($r$=0.1%) | Amazon($r$=0.2%) |
| - | - | - | - | - | - | - | - |
| Whole Dataset | 71.7±0.1 | 81.2±0.2 | 71.4±0.1 | 47.2±0.1 | 93.9±0.0 | 61.1±1.8 | 89.5±0.9 |
| DosCond [3] | 69.8±1.8 | 79.4±1.1 | 58.8±1.3 | 46.3±0.4 | 88.6±0.5 | 47.6±0.3 | 77.4±0.8 |
| SGDD | 70.3±0.8 | 80.6±0.8 | 67.2±2.8 | 47.1±0.3 | 91.8±1.9 | 58.1±2.3 | 84.8±1.7 |
| $\Delta$(%) | 0.5 $\uparrow$ | 1.2 $\uparrow$ | 8.4$\uparrow$ | 0.8$\uparrow$ | 3.2$\uparrow$ | 10.5$\uparrow$ | 7.4$\uparrow$ |
Conclusion(s):
- **Our method SGDD demonstrates superior performance compared to DosCond, particularly on the YelpChi and Amazon datasets.**
## Q4: Would changing the backbone potentially improve the overall performance?
A4: Thanks for the question. We conduct experiments on three datasets (Reddit, Ogbn-arxiv, and Flickr) with 6 different backbones.
- **Note:** We use $\uparrow$ and $\downarrow$ to indicate an increase or decrease relative to the default GCN.
| Acc. (%) | GCN | APPNP | Cheby | SAGE | SGC | GAT |
| - | - | - | - | - | - | - |
| Reddit, $r$=0.05% | 91.8 | 91.4/0.4$\downarrow$ | 92.1/0.3$\uparrow$ | 90.6/1.2$\downarrow$ | 91.9/0.1$\uparrow$ | 92.0/0.2$\uparrow$ |
| Ogbn-arxiv, $r$=0.25% | 67.2 | 66.4/0.8$\downarrow$ | 66.8/0.4$\downarrow$ | 66.5/0.7$\downarrow$ | 66.4/0.8$\downarrow$ | 66.4/0.8$\downarrow$ |
| Flickr, $r$=0.50% | 47.1 | 47.2/0.1$\uparrow$ | 46.2/0.9$\downarrow$ | 46.8/0.3$\downarrow$ | 47.0/0.1$\downarrow$ | 47.2/0.1$\uparrow$ |
Conclusion(s):
- **The variation across backbones is small in our case: all changes fall within -1.2 to +0.3 percentage points of the default GCN.**
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their clear rebuttals and efforts, which have addressed my concerns and problems. In particular, the extensive experiments on various datasets validate the method's effectiveness and superiority compared to baselines. In summary, this paper proposes a novel method called SGDD for condensing large-scale graph datasets while preserving the original structure information, and achieves state-of-the-art results. It is worth noting that data condensation has not been extensively explored in the field of graph learning, and this work addresses the problem from a novel perspective of graph structure, which will play a leading role in this field. I suggest this paper be accepted.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer UGAQ,
We would like to express our sincere gratitude to the reviewer UGAQ for endorsing our work and providing constructive suggestions.
Yes, we find that graph dataset distillation differs substantially from vision dataset distillation, which motivated our method design.
Thanks again for the time and effort in reviewing our work. | Summary: The paper proposes a novel approach for graph dataset distillation called Structure-broadcasting Graph Dataset Distillation (SGDD). The authors explicitly consider the impact of the original structure information on graph condensation and demonstrate that their approach achieves state-of-the-art results on 9 datasets, showing superior performance in cross-architecture settings and specific tasks. Overall, the paper presents a significant contribution to the field of graph dataset distillation.
Strengths: - The paper proposes a novel approach for graph dataset distillation that explicitly considers the impact of the original structure information on graph condensation. This is a significant departure from existing methods that primarily focus on optimizing the feature matrices of condensed graphs while overlooking the structure information.
- The authors introduce the concept of Substantial Laplacian Energy Distribution (LED) shifts and demonstrate that previous works suffer from such shifts, leading to poor performance in cross-architecture generalization and specific tasks. The authors propose SGDD as a solution to this problem, which is a novel and original contribution.
- The authors provide a thorough analysis of the proposed approach, including theoretical analysis and empirical evaluation on 9 datasets. The results demonstrate that SGDD consistently outperforms existing state-of-the-art methods, indicating the high quality of the proposed approach.
Weaknesses: - [Actual Computation Savings] The authors distill the graph dataset. However, the authors only provide the distilled dataset's size ratio to original dataset size. It would be helpful to provide the actual computation savings in terms of time and memory usage.
- [The Cost during Graph Distillation] The cost of the graph distillation process that produces the condensed small graph should be provided. Since the benchmark graph datasets used are less computationally expensive than datasets in computer vision and natural language processing, the cost of the graph distillation period will reflect whether distillation is worthwhile for practical use.
- [Comparison with Other Methods] The paper provides a comparison of the proposed method with other methods in terms of performance and size ratio (e.g., Table 1). However, it would be beneficial to also compare the cost of graph distillation for the proposed method with other methods. This information would provide a more comprehensive understanding of the practicality of the proposed approach compared to other methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions: Please see weaknesses. I would like to update my evaluation after the discussion.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate reviewer 9hTZ’s constructive feedback and are glad that the reviewer finds our work novel. We answer the questions one by one as follows. Hope it can address the reviewer’s concern.
### Q1: [Actual Computation Savings]
A1: Thanks for the suggestion. We conduct experiments on 5 datasets and show the results as follows.
**Storage Saving:** We present the basic statistics of the datasets and the storage savings in the following table.
| | Citeseer, | $r$=0.9% | Cora, | $r$=1.3% | Ogbn-arxiv, | $r$=0.5% | Flickr, | $r$=0.1% | Reddit, | $r$=0.1% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Whole | SGDD | Whole | SGDD | Whole | SGDD | Whole | SGDD | Whole | SGDD |
| Accuracy (%) | 70.7 | 69.5 | 81.5 | 79.6 | 71.4 | 65.3 | 47.1 | 47.1 | 93.9 | 90.5 |
| #Nodes | 3,327 | 60 | 2,708 | 70 | 169,343 | 454 | 44,625 | 44 | 232,965 | 153 |
| #Edges | 4,732 | 1,434 | 5,429 | 2,131 | 1,166,243 | 8,681 | 218,140 | 331 | 57,307,946 | 3,427 |
| Storage (MB) | 47.1 | 0.8 **$\downarrow$ 58.8X** | 14.9 | 0.4 **$\downarrow$ 37.2X** | 100.4 | 1.0 **$\downarrow$ 100.4X** | 86.8 | 0.1 **$\downarrow$ 868X** | 435.5 | 0.7 **$\downarrow$ 622X** |
Conclusion(s):
- **We achieved a storage saving of 868X in Flickr, 622X in Reddit, 100.4X in Ogbn-arxiv, 58.8X in Citeseer, and 37.2X in Cora, respectively.**
**Computing Saving:** We report the computing savings in the following table. All results are averaged over 5 runs (X denotes times).
| | Whole Training Time(min) | Condensed Training Time(min) | Acceleration Rate | Whole Training Memory (GB) | Condensed Training Memory(GB) | Compression Rate |
| --- | --- | --- | --- | --- | --- | --- |
| Cora | 18.6 ± 2.1 | 0.4 ± 0.1 | **$\downarrow$46.5X** | 3.2 ± 0.8 | 0.8 ± 0.1 | **$\downarrow$4.0X** |
| Citeseer | 32.7 ± 5.8 | 0.8 ± 0.2 | **$\downarrow$40.8X** | 3.1 ± 0.6 | 0.8 ± 0.1 | **$\downarrow$3.9X** |
| Reddit | 56.8 ± 10.1 | 1.1 ± 0.1 | **$\downarrow$51.6X** | 22.3 ± 1.5 | 1.4 ± 0.1 | **$\downarrow$15.9X** |
| Flickr | 23.4 ± 4.2 | 1.1 ± 0.3 | **$\downarrow$23.0X** | 8.4 ± 0.8 | 1.1 ± 0.2 | **$\downarrow$7.6X** |
| Ogbn-arxiv | 48.7 ± 12.1 | 1.2 ± 0.3 | **$\downarrow$40.5X** | 17.4 ± 1.8 | 1.2 ± 0.1 | **$\downarrow$14.5X** |
Conclusion(s):
- **Our method achieves a speedup of at least 23.0X and a compression ratio of at least 3.9X.**
### Q2: [The Cost during Graph Distillation]
A2: Thanks for the suggestion. We compare the cost of the graph distillation process with the cost of training on the whole graph. The results are shown in the following table.
We conduct experiments on 5 datasets and report the average computing time and memory. All experiments are repeated 5 times.
| | Whole Training Time(min) | Distillation Time(min) | Time Cost Rate | Whole Training Memory (GB) | Distillation Memory(GB) | Computing Cost Rate |
| --- | --- | --- | --- | --- | --- | --- |
| Cora | 18.6 ± 2.1 | 46.1 ± 4.8 | **$\uparrow$2.4X** | 3.2 ± 0.8 | 3.8 ± 0.6 | **$\uparrow$1.1X** |
| Citeseer | 32.7 ± 5.8 | 53.8 ± 3.6 | **$\uparrow$1.6X** | 3.1 ± 0.6 | 4.1 ± 0.7 | **$\uparrow$1.3X** |
| Reddit | 56.8 ± 10.1 | 120.8 ± 20.6 | **$\uparrow$2.1X** | 22.3 ± 1.5 | 28.6 ± 1.6 | **$\uparrow$1.3X** |
| Flickr | 23.4 ± 4.2 | 67.3 ± 10.7 | **$\uparrow$2.8X** | 8.4 ± 0.8 | 11.7 ± 0.4 | **$\uparrow$1.4X** |
| Ogbn-arxiv | 48.7 ± 12.1 | 140.3 ± 3.6 | **$\uparrow$2.8X** | 17.4 ± 1.8 | 22.7 ± 0.7 | **$\uparrow$1.3X** |
Conclusion(s):
- **Although distillation costs 1.6X-2.8X the time of one whole-graph training run, it remains highly valuable given the subsequent acceleration (23-51X) and compression (3.9-15.9X) once condensation is finished.**
### Q3: [Comparison with Other Methods]
A3: Thanks for the suggestion. We compare our method SGDD and the other 6 baselines on three datasets (Ogbn-arxiv, Reddit, and Flickr), the average results (5 runs) are shown in the following table.
- **Note:** We use the **Coarsening*** method as the base for comparison when calculating the time cost rate and the performance comparison.
| | Random | Herding | K-Center | Coarsening* | GDC | GCond | SGDD | Whole |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Avg. Time (min) | - | 10.2 ± 1.2 | 24 ± 1.2 | 61.7 ± 4.2 | 184.3 ± 10.4 | 160.2 ± 13.8 | 109.4 ± 10.2 | 42.7 ± 5.2 |
| **Time Cost Rate** | - | **0.1X** | **0.3X** | **1.0X** | **3.0X** | **2.6X** | **1.7X** | - |
| Avg. Computing (GB) | - | - | - | 7.4± 0.8 | 14.3 ± 1.2 | 14.8 ± 1.1 | 16.0 ± 1.4 | 11.5 ± 2.1 |
| **Computing Cost Rate** | - | - | - | **1.0X** | **1.9X** | **2.0X** | **2.1X** | - |
| Avg. Performance | 48.1 ± 2.1 | 54.6 ± 1.3 | 51 ± 0.7 | 56.6 ± 1.4 | 65.1 ± 0.8 | 66.6 ± 1.2 | 68.6 ± 0.7 | 73.1 ± 1.8 |
| **Performance Comparison (%)** | **-8.5** | **-2.0** | **-5.6** | **0.0** | **+8.5** | **+10.0** | **+12.0** | - |
Conclusions:
- **Compared with heuristic methods: heuristic methods have lower costs but much worse performance (-8.5% to -2.0%), indicating their limitations in condensation tasks.**
- **Compared with other condensation methods: SGDD introduces only a slightly higher computational cost than GDC and GCond (2.1X vs. 1.9X and 2.0X) but converges faster (1.7X vs. 2.6X and 3.0X), which we attribute to the better-learned structure. Meanwhile, SGDD reaches higher accuracy faster than GDC and GCond.**
We hope the above response could address the concerns and would like to show more experiments in the discussion period if the reviewer is interested.
---
Rebuttal Comment 1.1:
Title: Further Discussions with Reviewer 9hTZ
Comment: Dear Reviewer 9hTZ:
Thank you so much again for your time and efforts in assessing our paper. Hope our additional experiments on actual saving have addressed your concerns. We are happy to discuss with you further if you have other concerns. Thanks for helping improve our paper! | Summary: The paper investigates the effects of structural information in graph condensation methods. The authors claim that by maintaining the original structure during condensation using a newly formulated method called Structure-broadcasting Graph Dataset Distillation (SGDD), they are able to achieve more refined results on 9 data sets thereby significantly reducing Laplacian Energy Distribution (LED) shift.
Strengths: This paper addresses an important question regarding the impact of structural information. The authors conduct a comprehensive analysis from the spectral domain and empirically identify significant shifts in Laplacian Energy Distribution (LED), which ultimately result in poor performance in cross-architecture generalization and specific tasks.
The proposed method demonstrates effectiveness in the experiments, and empirical analysis confirms the efficacy and necessity of the proposed designs.
Overall, the presentation of the paper is also commendable
Weaknesses: Evaluating on the large datasets in the Open Graph Benchmark (OGB) would be more effective in demonstrating the efficacy of these methods.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I don't have question regarding this paper.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Need experiment for large datasets
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comment. We make responses as follows.
**Q1:** Need experiment for large datasets.
A1: Thanks for the reviewer’s suggestion.
**Dataset Statistics:**
- As illustrated in the table below, there are four datasets that are significantly larger than Ogbn-arxiv.
| Datasets | #Nodes | #Edges | #Classes | Metric |
| --- | --- | --- | --- | --- |
| Ogbn-arxiv | 169,343 | 1,166,243 | 40 | Accuracy |
| Ogbn-mag | 1,939,743 | 21,111,007 | 349 | Accuracy |
| Ogbn-proteins | 132,534 | 3,561,252 | 112 (Multi-label) | ROC-AUC |
| Ogbn-products | 2,449,029 | 61,859,140 | 47 | Accuracy |
| Ogbn-papers100M | 111,059,956 | 1,615,685,872 | 172 | Accuracy |
We conduct experiments on the Ogbn-mag datasets and show the details as follows.
**Results on Ogbn-mag:**
Note: we report results for SGDD and 3 baselines at 7 different condensation ratios ($r$). The accuracy on the whole dataset is 30.4 [A].
| Condensation Ratio($r$) | Random | K-Center | GCond | SGDD |
| --- | --- | --- | --- | --- |
| 0.0001% | 0.9 | 5.7 | 15.3 | 18.1 |
| 0.0002% | 1.1 | 6.8 | 15.4 | 18.3 |
| 0.0003% | 1.1 | 6.9 | 15.4 | 18.4 |
| 0.0004% | 1.4 | 7.1 | 15.3 | 18.7 |
| 0.0005% | 1.5 | 6.4 | 15.4 | 18.7 |
| 0.001% | 1.4 | 8.7 | 15.4 | 18.8 |
| 0.002% | 1.5 | 10.4 | 15.1 | 18.8 |
Conclusion(s):
- **On the large-scale dataset Ogbn-mag, our method achieves non-trivial improvements compared to other baselines.**
Due to computational resource limitations, we are currently running experiments on additional datasets and will provide them during the discussion period if the reviewer is interested.
---
[A] Hu et al. Open Graph Benchmark: Datasets for Machine Learning on Graphs. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Title: Further experiments on Ogbn-products
Comment: Dear reviewer ZZ8V:
Thanks for your patience. We have finished our experiments on Ogbn-products and show the results in the table below. We will add these results in the revision.
**Results on Ogbn-products:**
Note: we report results for SGDD and 3 baselines at 7 different condensation ratios ($r$). We use $\Delta$ to denote the improvement of our proposed SGDD over K-Center.
| Condensation Ratio ($r$) | Random | K-Center | GCond [B] | SGDD | $\Delta$ (%) |
| --- | --- | --- | --- | --- | --- |
| 0.0001% | 18.8 | 32.6 | 36.5 | 36.8 | $\uparrow$4.1 |
| 0.0002% | 18.4 | 34.7 | 36.4 | 36.8 | $\uparrow$2.0 |
| 0.0003% | 21.5 | 35.8 | 36.4 | 38.8 | $\uparrow$3.0 |
| 0.0004% | 23.8 | 35.6 | 36.9 | 38.6 | $\uparrow$3.0 |
| 0.0005% | 25.4 | 35.7 | 37.6 | 38.8 | $\uparrow$3.1 |
| 0.001% | 34.8 | 35.4 | 38.2 | 40.1 | $\uparrow$4.7 |
| 0.002% | 35.2 | 36.2 | 38.8 | 40.3 | $\uparrow$4.0 |
Conclusion(s):
- **Our proposed SGDD consistently outperforms other baselines across all condensation ratios with no meticulous tuning. Notably, at the 0.001% ratio it improves accuracy by 4.7% over K-Center, and by 1.9% and 5.3% over GCond [B] and Random, respectively.**
We eagerly await your feedback on whether our experiments have adequately addressed your concerns. Please feel free to let us know if you have other questions. Thanks again for your constructive suggestion!
[B] Wei Jin, et.al. Graph condensation for graph neural networks. ICLR 2022.
---
Rebuttal Comment 1.2:
Comment: I would like to thank the authors for the detailed response. It helps clarify the work. Regarding the scores, my rating remains valid to reflect the quality of the work.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer ZZ8V, we are truly grateful for your acknowledgment regarding our additional evaluation. We would like to express our sincere thanks once again for the reviewer’s work on reviewing our work! | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new graph condensation framework called SGDD. The authors argue that existing methods overlook the structure information of the original graph during the condensation. And thus they propose to 1) use the Laplacian Energy Distribution (LED) shift to indicate the generalization performance of the condensed graph, and 2) minimize the LED shift (they claim it’s equivalent to the OT distance between the original and the condensed graphs) to preserve the structure information. Experiments on different datasets over node classification, anomaly detection and link prediction tasks show the effectiveness of the proposed method.
Strengths: - The motivation of preserving the structure information is clear and reasonable.
- The experiments seem relatively sufficient and the results are good.
Weaknesses: My major concern is the soundness of this paper:
- Some claims seem to be problematic. For example, in line 176-179, the authors claim that ‘minimizing the LED shift is equivalent to minimizing the distance of Laplacian pseudo-inverse matrices [65]’ and ‘Following the previous work [78], minimizing such distance can be further approximated to optimizing a free parameter $P$ in Eq(6)’. However, the equivalence in [65] only applies to the objective using a specific GW distance and the approximation in [78] only applies to another OT distance based objective. So the two claims cannot hold simultaneously. What’s worse, the authors do not describe which OT distance is used in their original objective function. Then how could we conclude that ‘minimizing the upper bound of Eq. (7) is equal to optimizing the $L_{structure}$ on Eq. (5)’ in line 189?
- The writing is not very easy to follow. The mixed use of ‘LED shift’, ‘LED shift coefficient’, ‘SC’ is very confusing (e.g., in Def2 and the caption of Fig2). And it’s not clear what’s the role of the proposed LED shift coefficient. This coefficient is neither used to derive the objective function, nor adopted as a performance metric in the tables for comparison.
=============================
Post-rebuttal: The author responses have basically solved my concerns about the soundness.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the above part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed comments and insightful questions. We make responses as follows.
## **Q1:** The conflict of assumptions from the references [65] and [78].
**A1:** Thanks for the comment. There are primarily two types of graph optimal transport distances: Gromov-Wasserstein (GW) and Graph Optimal Transport (GOT). We present the differences between them as follows.
| Optimal Transports | Core Technologies | Famous Methods |
| - | - | - |
| GW | (1) GW is defined using a customized distance function that evaluates pairs of vertices in a graph [65]. (2) It quantifies the major alterations to the graph that significantly impact the distance function [A, B]. | FGW [65], S-GWL [A], SGW [B] |
| GOT | (1) GOT is defined based on the graph signal [A]. (2) It primarily captures changes to the graph that significantly affect the eigenvectors of the Laplacian with small eigenvalues [78]. | FGOT [C], COPT [78], GOT [D] |
FGW [65] belongs to the GW methods, while COPT [78] is a GOT-related method. Our proposed SGDD also relies on the graph signal $X$ of vertices and utilizes the Laplacian pseudo-inverse operation like [D], so SGDD is a GOT-related method. **After checking our LaTeX source, we found that we mistakenly cited the GW-based method [65] here. We update lines 175 to 177 as follows:**
***Optimal transport distance.** To address the above issue, we utilize optimal transport (OT) [78, D] to efficiently optimize the shift of LEDs. Specifically, minimizing the LED shift is equivalent to minimizing the distance between Laplacian pseudo-inverse matrices [D].*
**We will carefully check the content related to this issue in revision.**
---
[A] Hongteng Xu et. al. Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching. NeurIPS 2019.
[B] Titouan et. al. Sliced Gromov-Wasserstein. NeurIPS 2019.
[C] Maretic et. al. FGOT: Graph Distances based on Filters and Optimal Transport. AAAI 2021.
[D] Petric Maretic et. al. GOT: an optimal transport framework for graph comparison. NeurIPS 2019.
## **Q2**: Why minimizing Eq. (7) is equal to optimizing the $L_{structure}$ on Eq. (5)?
A2: Thanks for the question. First, considering the large difference in size between the original $\mathrm{A}$ and the condensed $\mathrm{A'}$ ($N' \ll N$, line 160), the $A'$ here is the “graphon” $W_{\mathbf{A}}'$ with $N'$ nodes, serving as an approximation to the oracle graphon $W_A$ [74], i.e., an abstraction of the original graph. Thus Eq. (5) can be written as:
$$ \mathcal{L}_{\textbf{structure}} = \mathrm{Distance} (W_A', W_A).$$
In our setting, we replace the original graph $\mathrm{A}$ with $W_{\mathrm{A}}$, as $\mathrm{A}$ is an observed realization of $W_A$ with $N$ nodes [74, 75]. Second, in Appendix B.2, the upper bound of Eq. (7) is the cut distance of two graphons:
$$ \delta_{\square}(W_{\mathrm{A}}', {W_\mathrm{A}}). $$
**As the cut distance $\delta_{\square}$ here is a specific choice of $\mathrm{Distance}$, minimizing the upper bound in Eq. (7) is equivalent to optimizing $L_{structure}$ in Eq. (5).**
We will make it clear and check similar issues in the revision. Thanks again for the question.
## **Q3:** Mixed use of the concepts of LED shift, LED shift coefficient, and $SC$.
A3: Thanks for the comment. Assuming we have two graphs, we introduce the definitions of LED, LED shift, LED shift coefficient, and $SC$ as follows.
**LED** is the distribution of a graph's signal after the graph Fourier transform.
**LED shift** is the phenomenon describing the difference between the two graphs' LEDs.
**LED shift coefficient** quantifies the divergence between the two graphs. We provide the detailed formulation in Eq. 4 (lines 144 to 147).
$SC$ is an abbreviation for the LED shift coefficient, introduced in line 143 and Eq. 4.
We are sorry about the confusion and will make it clear in the revision.
## **Q4**: Where the $SC$ be used.
A4: Thanks for the question.
1. As mentioned in lines 172 to 174, $SC$ requires an eigenvalue decomposition, which takes $O(N^3)$ time and is extremely costly, particularly when dealing with large graphs. We show the estimated time cost per run of $SC$ below.
| | Cora, $r$=1.3% | Citeseer, $r$=0.9% | Reddit, $r$=0.1% | Flickr, $r$=0.1% | Ogbn-arxiv, $r$=0.5% |
| - | - | - | - | - | - |
| $SC$ (h) | 18.0 | 21.0 | 130.0 | 65.0 | 78.0 |
| OT (h) | 0.8 | 0.9 | 2.1 | 1.1 | 1.8 |
2. Since we aim to alleviate the LED shift, the GOT-related OT distance serves as an approximate indicator with $O(N^2K)$ cost ($K \le N^{0.373}$) [78], which is far more practical. As shown in the above table, OT is 61.9X faster than $SC$ on Reddit, 59.0X faster on Flickr, 43.3X faster on Ogbn-arxiv, 22.5X faster on Cora, and 23.3X faster on Citeseer.
3. By leveraging the OT distance, we achieve a consistent reduction in $SC$ (see Figure 1(b, e) and Figure 5(b, e)). We provide more results below.
Note: we use $\downarrow$ to denote the reduction in $SC$ of SGDD relative to GCond.
| Methods | Ogbn-arxiv ($r$=0.25%) ($SC$/Acc.) | Reddit ($r$=0.05%) ($SC$/Acc.) | YelpChi ($r$=0.10%) ($SC$/F1-macro) | Amazon ($r$=0.2%) ($SC$/F1-macro) |
| - | - | - | - | - |
| GCond | 0.34/63.2 | 0.46/89.6 | 0.46/49.6 | 0.48/78.1 |
| SGDD | 0.24/67.2 ↓29.4% | 0.18/91.8 ↓51.1% | 0.23/58.1 ↓50.0% | 0.18/84.8 ↓62.5% |
Conclusion(s):
- **The** $SC$ **metric serves as a good indicator for measuring the relationship between LED shift and generalization performance.**
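For transparency, the speedup factors quoted above follow directly from the timing table; a minimal arithmetic check (the values below are copied from the table, and the helper dictionaries are our own illustration):

```python
# Speedup of OT over SC, computed from the per-run time costs (hours)
# reported in the table above; ratios are rounded to one decimal place.
sc_hours = {"Cora": 18.0, "Citeseer": 21.0, "Reddit": 130.0,
            "Flickr": 65.0, "Ogbn-arxiv": 78.0}
ot_hours = {"Cora": 0.8, "Citeseer": 0.9, "Reddit": 2.1,
            "Flickr": 1.1, "Ogbn-arxiv": 1.8}
speedup = {name: round(sc_hours[name] / ot_hours[name], 1) for name in sc_hours}
# e.g. Reddit: 130.0 / 2.1 is ~61.9x, Cora: 18.0 / 0.8 = 22.5x
```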
---
Rebuttal Comment 1.1:
Title: Further Discussions with Reviewer 5ivm
Comment: Dear reviewer 5ivm:
Thanks again for pointing out the key benefit of preserving the structure information. Following your constructive comments, we have made the necessary adjustments to our introduction of optimal transport and conducted experiments measuring the time cost of $SC$. Additionally, we have revised our writing of the relevant concepts to make them more accessible to readers.
As the rebuttal period is about to close, may I know if our rebuttal addresses your concerns? Thank you for taking the time to review our work and provide your insightful comments.
---
Reply to Comment 1.1.1:
Title: Looking forward to your reply!
Comment: Dear reviewer 5ivm,
As we are nearing the rebuttal deadline, may I know if you have any other concerns regarding our work? Thanks!
Best,
Author of Submission 1279 | null | null | null | null | null | null |
Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution | Accept (poster) | Summary: This paper proposes an efficient training technique for Vision Transformers, called Patch n’ Pack. Specifically, it packs multiple images of various input resolutions into a single sequence as a batch example. Furthermore, based on the modified architecture, the authors propose NaViT. By combining Patch n’ Pack and NaViT, the authors conduct extensive experiments on the JFT-4B dataset as well as a few downstream tasks. Overall, the method achieves great efficiency for pretraining, and the model achieves better accuracy at different image resolutions at inference time.
Strengths: 1. Adapting general-purpose Transformers into different input image resolutions is a fundamental research problem. Therefore, the technique that proposed in this paper is important to the community. And it works pretty well on both pretraining and downstream tasks.
2. The experiments are comprehensive. The efficiency gain during pretraining is impressive.
3. This paper is easy to follow. The overall presentation is of great quality.
Weaknesses: 1. Packing examples into a single sequence during training is not new in the literature. However, it's technically new for ViT training.
2. Despite the performance, pretraining on JFT-4B is very expensive, making it difficult for the subsequent works to follow up and compare with. It would be better for the authors to include experiments of pretraining on ImageNet-1K.
3. It is not clear how the memory cost would be under the proposed Patch n’ Pack.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be better for authors to add additional experiments for pretraining on ImageNet-1K and compare with FlexViT. Please also include memory consumption report during pretraining.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for time spent reading the paper and your critique and comments; we're glad you appreciated the significance of the research problem, clarity of writing, and the strength of results.
We address here the mentioned weaknesses.
1. `Packing examples into a single sequence during training is not new in the literature. However, it's technically new for ViT training. `
We agree, and have cited prior work accordingly. While this is a substantial technical change for vision modeling, we believe we contribute more still: we show it unifies a number of approaches recently developed in the computer vision literature (variable aspect ratio and token dropping), and enables new approaches which could not be explored before due to the restrictive need for a constant sequence length per image.
Alongside this, we have introduced variable resolution as a method for significantly speeding up training while producing more flexible models, and novel positional embedding schemas which enable generalization to larger resolutions. We think there's more here than just sequence packing, and believe as you pointed out there's a lot which would be of interest to the academic community.
2a. `Despite the performance, pretraining on JFT-4B is very expensive, making it difficult for the subsequent works to follow up and compare with.`
This is a valid concern. We first note that other works demonstrating methodological improvements on JFT, such as the original Vision Transformer paper, opened the door to a plethora of research done at smaller scale (even the experiments in the original Transformer paper were at large scale in NLP). We believe the findings here are transferable to smaller datasets, enable additional innovation at smaller scales, and will therefore be useful at many scales.
That being said, based on Figure 1, the largest relative improvements seen are at smaller training schedules, which is promising for smaller scale research. We also note that we showed NaViT's techniques are useful during downstream finetuning, which are a much lower compute setup.
`It would be better for the authors to include experiments of pretraining on ImageNet-1K.`
We have indeed been working on pre-training with public datasets, in order to develop a more reproducible setup and open-source models. Prior work [1] has shown that performant ViT models pre-trained on ImageNet1k and ImageNet21k necessitate careful augmentation and regularisation. However, many of these techniques are very tailored to square images; for example, it’s unclear how mixup generalises to images of different sizes and shapes. Adapting these techniques to NaViT therefore required more work than anticipated.
In initial experiments training NaViT-B/16 on ImageNet-1k, we can already match ViT’s performance with less than half the compute budget; see [this figure](https://i.imgur.com/KN3EOly.png) for initial results. This is work in progress, which we aim to finish for open-sourcing code around the conference, but we cannot promise it will be ready by the camera-ready deadline. Nonetheless, as discussed, learnings from JFT are known to transfer well to new setups and to trigger further research at smaller scales. Thus we believe the paper in its current form is still of significant value to the wider academic community.
[1] How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers, Steiner et Al, TMLR2022
3. `It is not clear how the memory cost would be under the proposed Patch n’ Pack.`
Regarding memory costs, we have an initial discussion on this in Figure 4, though we appreciate in hindsight this is a bit abstract.
Memory-wise, NaViT generally compares favorably. Assuming identical architecture and batch size, memory cost for both approaches is controlled by the sequence length. For ViT, this is set by the resolution. For NaViT, this is controlled by the maximum resolution we want to support. Assuming we keep NaViT's maximum resolution equal to the equivalent ViT baseline, which we would do anyway, they have identical memory costs.
Concretely, if we train ViT-B/16 at resolution 384 on ImageNet-1k, it would have sequence length 576. An equivalent NaViT-B/16 could train on resolutions ranging from 64 to 384; it would have the same sequence length of 576 (and therefore the same memory cost), but it would fit in expectation almost two times as many images. At the same memory cost, NaViT will fit more images (or conversely, it can fit the same batch size, at a smaller memory cost).
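For concreteness, the sequence-length arithmetic behind this comparison can be sketched as follows (the `seq_len` helper is our own illustration, not code from our implementation):

```python
def seq_len(height, width, patch=16):
    # Number of patch tokens a ViT-style model produces for an image.
    return (height // patch) * (width // patch)

# ViT-B/16 at resolution 384 yields a sequence of 576 tokens.
assert seq_len(384, 384) == 576
# With the same maximum sequence length, one packed NaViT sequence could hold,
# e.g., a 256x256 image (256 tokens) plus five 128x128 images (5 * 64 tokens):
assert seq_len(256, 256) + 5 * seq_len(128, 128) == 576
```

So at the memory cost of a single 384x384 image, this packed sequence contains six images.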
We believe we may have originally overstated the memory costs of NaViT, so we will clarify this in the paper.
Many thanks again for time spent on this review; we hope this and the updated manuscript clarifies some aspects you raised. | Summary: This paper focuses on adapting the computer vision model to flexible usage. The authors stand from the ViT architecture and exploit its flexible sequence-based modeling to enable arbitrary resolutions and aspect ratios. The proposed NaViT could benefit the downstream tasks of object detection, image, and video classification. Evaluations on typical ViT tasks show the performance on different downstream tasks.
Strengths: Exploiting the flexible sequence-based modeling of ViT models is interesting. This paper uses a simple but effective idea to make the image preprocessing match arbitrary resolutions and aspect ratios. The idea is motivated by convincing preliminary experiments. The performance evaluation also presents useful insights into utilizing NaViT’s property.
Weaknesses: The architecture design of NaViT and its essential components to extract visual features could be introduced to help the readers better understand the techniques.
Although the authors claim the proposed NaViT can be applied to different downstream tasks, the experiments are mainly based on high-level tasks of classification and detection. I am interested in NaViT’s generalization to more low-level tasks, such as super-resolution with arbitrary scales and pixel-level segmentation.
The additional overhead of introducing NaViT as the image preprocessing could be reported.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the suggestions in the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your review and critiques; we have made a few updates to the manuscript based on your feedback, and hope it helps address some of the mentioned weaknesses.
We will discuss them in more detail now:
1. `The architecture design of NaViT and its essential components to extract visual features could be introduced to help the readers better understand the techniques. `
Aside from positional embeddings, the model architecture is functionally identical to the Vision Transformer. Given its widespread usage, we didn’t repeat ViT's architecture design and instead showed the differences introduced by our approach in Figure 2. We made this point more clear on the revised version of the paper and more explicitly directed readers to the Vision Transformer paper for further details on the architecture.
2. `Although the authors claim the proposed NaViT can be applied to different downstream tasks, the experiments are mainly based on high-level tasks of classification and detection. I am interested in NaViT’s generalization to more low-level tasks, such as super-resolution with arbitrary scales and pixel-level segmentation.`
Good point! In the meantime, we have explored semantic segmentation on ADE20k, finetuning the L/16 models presented in Figure 1. The benefits of NaViT transfer naturally to this setting; for example, NaViT finetuned at resolution 384 outperforms ViT finetuned at resolution 512, while finetuning twice as fast (e.g. see [this figure](https://i.imgur.com/hQK7DCq.png) for segmentation results). We have added this figure and discussion of the results to Section 3.6 of the updated manuscript.
3. `The additional overhead of introducing NaViT as the image preprocessing could be reported`
We assume the reviewer refers to the cost of image preprocessing. During training, as is common, the data preprocessing occurs on CPU. Typical for deep learning models at this scale, preprocessing is significantly faster than the time for a single training step (models are not input bound), and therefore there we observed no training time overhead of using Patch 'n' Pack.
After training is complete, the resulting model is architecturally the same as ViT, except that it performs better, and generalizes better to new image sizes. But there is no additional overhead of running the model compared to an equivalent sized ViT.
We hope these comments and the updated manuscript help address the mentioned concerns, and thank the reviewer again for their time!
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comments
Comment: Thanks for answering the questions. My concerns have been addressed.
I think it is a good paper and I have increased my rating. | Summary: This passage discusses the common practice of resizing images to a fixed resolution before processing them with computer vision models, which is not optimal. The author introduces a new model called NaViT (Native Resolution ViT) that takes advantage of flexible sequence-based modeling and allows for processing inputs of arbitrary resolutions and aspect ratios with adaptive positional embeddings. NaViT uses sequence packing and token drop which improves training efficiency for large-scale supervised and contrastive image-text pretraining. The author believes that NaViT represents a promising direction for ViTs and offers a departure from the standard input and modeling pipeline used by most computer vision models, which rely on CNN-designed approaches.
Strengths: - The authors present a simple method that significantly enhances the training efficiency of the vanilla ViT model, as evidenced by the results displayed in Figure 1. The observed improvements are noteworthy and suggest potential for practical application.
- The authors also make a compelling argument, supported by Figure 3, that the conventional practice of resizing or padding images to a fixed size, which has been historically associated with convolutional neural networks, is flawed. Specifically, the authors demonstrate that both resizing and padding can lead to suboptimal performance and inefficiency, respectively.
- While the overall algorithms are relatively straightforward, they are clearly depicted in Figure 2.
Weaknesses: - The authors have presented a comprehensive set of experiments to demonstrate their results. However, I would like to raise a concern regarding the absence of comparison with the related works discussed in Section 4. This omission makes it challenging to evaluate the actual improvements over the baseline methods in a fair manner.
- To address this issue, I strongly encourage the authors to provide a detailed discussion on the primary contributions of their proposed methods. It appears that the example packing technique has already been thoroughly discussed in Efficient Sequence Packing [1], and one could simply replace the word tokens with image patches to form the proposed method. Moreover, apart from the example packing, the main difference between "Patch n'Pack" and Pix2struct [2] seems to rely solely on the construction of positional embedding. Additionally, it is worth noting that the most recent work [3] also aims at mix-resolution tokenization. While the proposed method may have some differences from existing works, the current manuscript fails to clearly establish the novelty of this paper.
### Reference:
- [1] Efficient sequence packing without cross-contamination: Accelerating large language models without impacting performance. Submitted to ICLR2022
- [2] Pix2struct: Screenshot parsing as pretraining for visual language understanding. Submitted to ICLR2023
- [3] Vision Transformers with Mixed-Resolution Tokenization. CVPR2023w
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The authors have made an attempt to demonstrate the relative improvements over competitive counterparts in Figure 10. However, the differences between factorized position embeddings and NeRF appear to be marginal, with a range of less than $\pm 0.2\%$.
- I would appreciate further clarification on the statement "This enables variable aspect ratios, with resolutions of up to $R=P\cdot \text{maxLen}$". Given that Pix2struct only introduces positional embeddings with the size of $[\text{maxLen},\text{maxLen}]$, and "indexed with (x,y) coordinates of each patch", I am curious how the resolution can be enlarged to $P\cdot \text{maxLen}$ without patch embedding in the size of $P\times P$.
- The authors have discussed several position encoding methods, but I remain unconvinced that any specific formulation significantly contributes to the performance improvements over the others.
- Could the authors provide additional details on the settings of "the total number of pixels can be resampled while preserving the aspect ratio"? While I agree that "for NaViT a resolution of “128” has the same area of a square 128 x 128 image, but could be 64 x 256, or 170 x 96", I am curious as to why the "effective resolution" of $64 \times 256$ is larger than $128 \times 128$.
- The authors have thoroughly discussed the benefits of the variable resolution, sampling strategies, and token dropping. However, none of these concepts are first proposed in this work. While the details may differ from existing works, such as the resolution sampling strategy, the fundamental idea remains the same. Therefore, the novelty of this paper appears to be limited.
- The relationship between "Pack n'Patch" and "Self-attention cost" appears to be weak, as the packing of multiple patches from different images into a single sequence does not increase the cost compared to the original single sequence scheme. We kindly request the authors to provide further clarification on why the attention overhead should vary with the number of patches packed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I found no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and comments.
We first address the weaknesses section, and in particular concerns around lack of novelty.
*Weaknesses*
Sequence packing, variable aspect ratio, and token dropping are indeed not new concepts, and have independently been studied. This is not true for variable resolution; multiresolution models are common in the literature, typically for dense prediction tasks, but to our knowledge this is the first work to explore variable resolution to speed up training (we also note that your reference [3] was uploaded one month after the NeurIPS submission deadline).
As well as this, our major contributions are:
* Introducing sequence packing to computer vision (Patch n’Pack), and demonstrating that it combines very well with multiple recent advancements in computer vision (token dropping and variable aspect ratio).
* Introducing variable resolution as a way to speed up training.
* Demonstrating that Patch n’ Pack enables a far richer design space for these computer vision techniques than was previously possible (e.g. sampling per-image resolutions and token dropping rates), and showing initial benefits from doing so.
* Introducing positional embedding methods which demonstrably improve generalization to new resolutions.
We believe this will be of great interest to the academic community, change the de-facto non-packed method of training vision models, and meets standards of novelty for publication.
*Questions*
`...the differences between factorized position embeddings and NeRF appear to be marginal, with a range of less than 0.2%. `
`...remain unconvinced that any specific formulation significantly contributes to the performance improvements over the others.`
Figure 10-left presents the difference between positional embedding methods, and the gaps between different approaches are indeed small.
However, this is for the evaluation in the “in-distribution- setup”, where the train and test resolutions are in a similar range.
The differences between different methods are more significant in out-of-distribution resolution evaluations (Figure 10 right); this is the main reason we propose the alternative approaches. We added a paragraph to the positional embedding section to clarify this.
The benefits of our approach will hopefully also become clearer below in our response to the next question.
`...further clarification on the statement "This enables variable aspect ratios, with resolutions of up to $R=P\cdot \text{maxLen}$". Given that Pix2struct...`
During training there is a maximum sequence length. As an example, say it was 9 tokens; with an example patch size of P = 10, this corresponds to a 30 x 30 image (or 10 x 90, or 20 x 40 with one padding token, etc).
For Pix2struct we would correspondingly set `max_len = 9`. It uses a fixed grid of 2D embeddings of size `9 x 9`. During training, only images with a sequence length ≤ 9 are seen. This means only positional embeddings where $x \times y \leq 9$ are actually trained; the rest remain randomly initialized. At inference time, given a larger image, most combinations of $(x, y)$ will not be trained, as any larger image will have some $x, y$ where $x\times y > 9$. The Pix2struct approach therefore cannot generalize to larger images; we have prepared [this diagram](https://i.imgur.com/8MWdkZK.png) to hopefully also help.
Our factorized approach uses separate positional embeddings for X and Y coordinates. In the above example, each of these positional embeddings is of length `9`. All positional embeddings are used during training, which means when running inference on larger images, no untrained positional embeddings are used. It can e.g. generalize to a 40 x 40 image, as it saw 20 x 40, and 40 x 20 during training, even though a 40 x 40 image would never be seen during training as its sequence length of 16 exceeds the max length of 9. [This diagram](https://i.imgur.com/hjKnvSB.png) visualises this.
We hope this clarifies the setup!
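To make the coverage argument concrete, here is a toy enumeration for the 9-token example above (our own illustration, not code from the paper):

```python
MAX_LEN = 9  # maximum training sequence length from the example above

# Pix2struct-style joint (x, y) grid: a position is trained only if some
# training image of at most MAX_LEN patches contains it, i.e. x * y <= MAX_LEN.
joint_trained = {(x, y)
                 for x in range(1, MAX_LEN + 1)
                 for y in range(1, MAX_LEN + 1)
                 if x * y <= MAX_LEN}

# Factorized embeddings: x and y indices are embedded separately, so every
# index up to MAX_LEN is trained (e.g. by 1x9 and 9x1 training images).
x_trained = set(range(1, MAX_LEN + 1))
y_trained = set(range(1, MAX_LEN + 1))

# A 4x4 image (16 tokens) exceeds the training max length, so the joint grid
# never trained position (4, 4); both factorized indices 4 were trained.
assert (4, 4) not in joint_trained
assert 4 in x_trained and 4 in y_trained
assert len(joint_trained) == 23  # only 23 of the 81 joint embeddings trained
```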
`...provide additional details on the settings of "the total number of pixels can be resampled while preserving the aspect ratio...`
By effective resolution, we mean $\sqrt{\mathtt{num\_pixels}}$.
The effective resolution of 64 x 256 is the same as 128 x 128, as they both have the same number of pixels (64 * 256 = 128 * 128 = 16384).
What we discussed there is the difference in approaches to resizing. Typical computer vision pipelines would squash an image to square, and then resize to desired resolution $R$.
We instead resize the image such that it has (roughly) $R^2$ pixels. This has the same number of pixels, but retains the aspect ratio. It is important to have the same number of pixels, as this controls the number of patches and therefore the compute cost, and allows for comparable evaluations.
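A minimal sketch of this resizing rule (our own illustration; `target_size` is a hypothetical helper, not the paper's code):

```python
import math

def target_size(height, width, r):
    # Rescale so the image has roughly r**2 pixels while keeping aspect ratio;
    # sqrt(num_pixels) is the "effective resolution" discussed above.
    scale = r / math.sqrt(height * width)
    return round(height * scale), round(width * scale)

# A square image stays square, and a 1:4 image keeps its shape:
assert target_size(128, 128, 128) == (128, 128)
assert target_size(100, 400, 128) == (64, 256)  # 64 * 256 == 128 * 128 pixels
```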
`...Therefore, the novelty of this paper appears to be limited.` We hope this question is well addressed by the first response!
`The relationship between "Pack n'Patch" and "Self-attention cost" appears to be weak, ...`
Self attention cost scales quadratically with sequence length $n$.
With $B$ images, each of sequence length $n$, the cost is roughly $O(Bn^2)$.
If instead we pack $k$ images per sequence, increasing sequence length to $kn$ but processing a smaller batch of size $\frac{B}{k}$, the cost would be $O(\frac{B}{k} \times (kn)^2) = O(Bkn^2)$.
This extra factor in the cost is broadly speaking the concern around the self attention cost of Patch n’ Pack. However, we note two things:
* The self attention cost of a transformer model becomes an increasingly small proportion of overall costs as transformers scale up. This is what is shown in Figure 4.
* We don’t actually have to increase the sequence length to benefit from NaViT. With the same max sequence length, NaViT can mix in images at smaller resolutions (less tokens than n) and still enjoy the increased throughput.
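The cost argument above can be written out as a back-of-the-envelope sketch (constants and non-attention terms are ignored, and the concrete numbers are our own illustrative choices):

```python
def self_attn_cost(batch, seq_len):
    # Self-attention FLOPs scale as batch * seq_len**2 (constants dropped).
    return batch * seq_len ** 2

B, n, k = 256, 576, 4                    # illustrative batch, tokens/image, pack factor
unpacked = self_attn_cost(B, n)          # B sequences of length n
packed = self_attn_cost(B // k, k * n)   # B/k packed sequences of length k*n
assert packed == k * unpacked            # the extra factor of k discussed above
```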
We hope we have addressed some of the concerns and that the updated manuscript is to a satisfactory standard!
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the detailed feedback. After carefully considering the comments from the other reviewers, I still have some reservations about the novelty of the proposed technique. While "explore variable resolution to speed up training" may be new in the literature on vision transformers, I am not fully convinced that combining sequence packing, variable aspect ratio, and token dropping should be considered a significant contribution to the field.
However, I must acknowledge that the authors have addressed all of the concerns raised in the rebuttal, and there are no apparent flaws in the proposed method. Therefore, I raise my previous rating to borderline.
I would like to note that the authors claim "Introducing variable resolution as a way to speed up training" as one of the major contributions of this paper, while this technique has been discussed for CNN backbones as a group of resolution-level dynamic networks.
Finally, I would encourage the authors to provide further reflection on the response to the last question in the final version of the paper. Without this clarification, it may be difficult for readers to fully understand the trade-offs involved in using the proposed method, and the paper may not be as impactful as it could be. | Summary: The authors propose to use example packing to train ViTs, where training examples of various lengths are packed into a single sequence. This requires a few straightforward architecture changes, including modified attention masking, pooling, and positional embeddings. This scheme allows for some interesting ideas, such as variable image resolutions during training, variable token drop rates, and adaptive inference time computation.
Extensive experiments in the paper show that NaViT results in more efficient training and finetuning (in terms of TPU hours) and a better compute-performance tradeoff at inference time. Further analysis shows that mixed resolution training is beneficial to model performance, that the time-varying token drop ratio allowed by the model can improve results when the correct schedule is used, and that the proposed factorized positional embeddings can offer very good generalization to aspect ratios and resolutions unseen during training. Finally, additional experiments show promising behavior with respect to calibration, fairness, object detection, and video classification.
Strengths: Overall, this is a very comprehensive paper. The number of experiments and the range of aspects of the proposed model's performance that are evaluated is impressive. The authors touch on a large number of relevant properties of the model, including calibration stability, positional embedding ablations, and out-of-distribution performance. Additionally, results are quite positive, showing promising results on image classification, object detection, video classification, and training efficiency. Finally, the appendix is thorough, and gives enough details to accurately reproduce all experiments from what I can see.
Weaknesses: According to appendix section B.1, classification experiments seem to be "compute-matched." Are there any experiments analyzing the asymptotic performance between ViT and NaViT, or can the authors discuss this? The experiments in the paper seem inconclusive on this point.
The question of compute matching also applies to downstream experiments. For example, in the fairness analysis, are compute matched ViT and NaViT being compared? If so then the conclusion from Appendix H that "native image resolution improves the performance of fairness signal annotation" may not hold.
NaViT seems to benefit from training at the original aspect ratio, and at variable resolutions. However, could either of these benefits be achieved through scale and crop data augmentation? What data augmentation is used for ViT? Additionally, I was under the impression that image stretching or shearing could be useful as a data augmentation. Could the authors please discuss this?
Where are the contrastive results? I might have overlooked something, but I can't find zero-shot imagenet or COCO image-text retrieval results as discussed on line 154.
Minor typos:
- Figure 2 typo: "image 2" is repeated twice in the "data processing" part
- Figure 9 is before Figure 7 and 8
- It seems that a lot of (if not all) appendix section references are incorrect. Examples of these typos occur in line 152, line 158, line 339, and line 120.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See weaknesses section, thank you.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I think the "limitations" section from the appendix could be more forthcoming and candid. It mostly just says "NaViT" is a great idea, but we didn't get to apply it to all the tasks we wanted to. It could benefit from a more honest discussion of the method's limitations, such as compute overhead from packing or additional architectural changes needed to support example packing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, many thanks for your time and detailed feedback. It was very useful and constructive, and we made multiple improvements to the manuscript based on your comments. Please see below for some detailed responses to the weaknesses you pointed out.
1. `Compute matching and asymptotic performance`
This is a good point; based on current experiments, we believe a lot of the observed performance benefits are due to speeding up training with NaViT (i.e. seeing more images in less time).
We introduce [this figure](https://i.imgur.com/Rpn4GBU.png), which augments Figure 1. The center panel shows that, per token seen, NaViT trains much faster than ViT (as it does per TPU-hour, left panel); this is because Patch 'n' Pack enables NaViT to see tokens from more unique images (right panel).
Comparison of asymptotic performance can be difficult. First, when pre-training on these large datasets, it is prohibitively expensive to train to convergence, so more training always yields improvements. It is therefore common for foundation models to be compared at a fixed (often large) training budget, as we show above. Even at the largest budget (2000+ TPU-chip hours), NaViT is still far ahead.
Further, in the infinite compute limit, for a given number of images seen, it is best to train on purely large resolution, without token dropping. This is not a setting that we explore since it substantially slows down training. Asymptotically, we therefore expect the token dropping and mixed resolutions to eventually hurt NaViT, although we believe that it would be very expensive to approach this regime.
We have added a new section (Appendix B.5) to the appendix with this figure, discussing asymptotic performance, and added pointers to it in the introduction, in the updated version of the paper.
2. `Compute matching downstream experiments`
It would indeed be unfair to compare non-compute matched models, since pre-training on large datasets (like JFT) for longer always improves performance. Downstream experiments were all done using models pre-trained for the same amount of time, and evaluated at the same effective resolution (number of tokens) - i.e. compute matched for both training and eval cost. Specifically, *all downstream experiments used the top-rightmost points in Figure 1 (ViT-L/16 and NaViT-L/16)*. We have added a paragraph to the experimental setup to make this point clear. Many thanks for raising this.
3. `NaViT seems to benefit from training at the original aspect ratio, and at variable resolutions. However, could either of these benefits be achieved through scale and crop data augmentation?`
We do not believe aspect-ratio conservation provides an augmentation effect. It may interact with other data augmentations; e.g. inception-cropping samples a rectangular bounding box, and then squashes it back to a square. This may work better with aspect-ratio preserving models.
Variable resolution and token dropping could indeed act as data augmentation; we did not conclusively demonstrate this in the paper, since the large-scale pre-training regime we explored typically doesn’t benefit significantly from data augmentation.
We are actively exploring this in smaller-scale settings; this is, however, quite an undertaking. Many of the data augmentations/training techniques which enable training ViT at small scale, such as mixup, require some rethinking when dealing with non-square or variable-resolution images. We therefore leave research on data augmentation for native-resolution vision models to future work.
4. `What data augmentation is used for ViT? Additionally, I was under the impression that image stretching or shearing could be useful as a data augmentation. Could the authors please discuss this?`
We follow the original Vision Transformer paper, and follow-ups; models are trained on images with resolution 224×224, with inception crop followed by random horizontal flipping. Pre-training on JFT and large image-text datasets does not benefit much from heavy data augmentation as discussed above, and it isn’t common practice to use image stretching or shearing; this is also the case for other computer vision works with large datasets e.g. CLIP. We added a small section to the appendix (Appendix B.4) with these details.
5. `Where are the contrastive results? I might have overlooked something, but I can't find zero-shot imagenet or COCO image-text retrieval results as discussed on line 154.`
Apologies, we realise in hindsight these weren't clearly highlighted. A few of the experiments were performed in a contrastive setup, e.g. Figure 7, and the contrastive-pretrained models used to initialize the detection OWL-ViTs in Section 3.6, where zero-shot ImageNet accuracy was reported for both. We will make this more clear for the camera ready version.
6. `Minor typos`: many thanks for noticing these; we have corrected them in the uploaded pdf!
7. `Honest limitations section`
Thanks for raising this. We have updated the limitations section to:
* Discuss compute overhead, referencing figure 4 and related discussions.
* Touch on the complexities of implementations/architectural changes.
* Mention what we discussed above about the unexplored data-augmentation angle.
* Mention some limitations relating to small-scale experiments.
Thanks again for your insightful review; we hope these have answered your questions, and that the updated manuscript reflects the addressed feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions and updating the manuscript where appropriate. The response has cleared up some of my confusion and has resolved all of my concerns satisfactorily.
In addition, I have read the other reviews and corresponding rebuttals, and I am of the opinion that the author has given reasonable responses to points brought up by the other reviewers. In particular, two reviewers are concerned about the novelty of the work. On the contrary, I agree with the authors that while sequence packing, variable aspect ratio, and token dropping have been proposed before, their combined use in this work is novel (i.e. the fact that sequence packing can quite naturally enable variable resolution or aspect ratio training and dynamic token dropping).
Overall, I still think the paper is a good submission and I want to keep my rating as is. I would be happy to discuss additional questions or points that the authors or other reviewers may have. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and valuable comments. We were happy to hear that they found the paper “*very comprehensive*”, leading to “*impressive performance*” (WGZP), making a “*compelling argument*” that we need to go beyond fixed-size resolutions (CXkP), based on an “*interesting idea*” and backed up with “*useful insights*” (z6Gn) that leads to “*impressive efficiency gains*” as a result of solving a “*fundamental research problem*” and, finally, that the paper is “*easy to follow*” due to its “*great presentation quality*” (SsJZ).
We have made multiple modifications based on the feedback, including clarification of text, fixing typos, adding semantic segmentation results, and adding/deepening discussions on asymptotic performance and limitations.
Pdf: /pdf/068fd826c8db1b0ac9f11e8e2f82e790e27c2337.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Chanakya: Learning Runtime Decisions for Adaptive Real-Time Perception | Accept (poster) | Summary: This paper proposes a learned approximate execution framework named Chanakya. Chanakya considers intrinsic context like images and extrinsic context like latency and predicts runtime decisions. The reward function helps to learn a better trade-off online. Extensive experiments on the Argoverse-HD dataset show that Chanakya outperforms static policy and can be applied to new hardware and action spaces.
Strengths: 1. This paper jointly optimizes accuracy and latency for real-time tasks. The reward function proposed considers both the simultaneous optimization between accuracy and latency, and different characteristics of the video sequence.
2. The detailed analysis of experiment results demonstrates the performance improvements of Chanakya.
3. This paper is well-organized, well-written, and easy to read.
Weaknesses: 1. Some settings of experiments are not clarified, like offline upper bound and RL training method.
2. Some details of learning the controller should be introduced in the main part of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why did employing L_{streaming} not converge, as mentioned in Line 165?
2. In table 1, how do you get sAP for offline upper bound?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors just mention that their method might be employed to improve unethical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thought provoking review, we shall incorporate the suggested point in the final manuscript!
1. Thank you for the upper bound suggestion, please see main comment.
2. Details of controller training method are provided in the supplementary along with the anonymous code link, please let us know if there are specific questions that need clarification.
3. In both datasets, the number of videos available for training is on the order of hundreds (say N). L_{streaming} yields only one reward value per video execution (thus one epoch → N samples for training), and the model's performance was worse than static policies after 5 epochs. To circumvent this, we chunked every video into temporal segments (essentially R_{1}), yielding far more samples for training ((T/30)*N samples, where T is the average length of the video).
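To make the sample-count argument above concrete (a toy sketch; the function name and the example numbers are ours, purely illustrative):

```python
def reward_samples_per_epoch(num_videos, avg_frames, chunk_frames=30):
    """Training samples per epoch: one reward per full video
    (L_streaming) vs. one reward per temporal chunk (R_1)."""
    per_video = num_videos                                 # N
    per_chunk = (avg_frames // chunk_frames) * num_videos  # ~ (T/30)*N
    return per_video, per_chunk

# e.g. N = 300 videos averaging T = 900 frames each:
full, chunked = reward_samples_per_epoch(300, 900)
# -> 300 samples per epoch vs. 9000
```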
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I keep my score as 6. | Summary: This paper introduces Chanakya, a framework for computation planning in real-time (streaming) perception. Chanakya uses a novel reward to make run-time decisions based on content- and system-based characteristics, simultaneously optimizing for both accuracy and latency. The proposed controller is learned on individual frames and uses a normalized reward to reduce bias. Chanakya’s performance is evaluated on a real-time perception stack consisting of a detector, scheduler, forecaster and tracker; these individual components are made part of the search space (eg: choice of model), along with their associated configurations (scale, #proposals, etc.). Results are reported across edge devices, runtime contention levels, and for cases where the perception stack has been migrated from one GPU to another.
Strengths: * This is a well-written paper, and the core ideas have been explained clearly.
* Impact & relevance: the paper targets real-time perception, which is an important component of modern AV and embodied systems.
* Strong results: Chanakya appears to outperform SOTA static and dynamic approaches, and ones designed by domain experts.
Weaknesses: * It is not clear how well Chanakya scales with increasing search space size. While the paper includes an evaluation of a reasonable number of decision dimensions, it would be interesting to get an understanding of Chanakya's scalability limits, especially since the authors claim in Section 2 that the search space could potentially be expanded by trying to account for varying input resolutions or via techniques such as pruning and/or quantization.
* No comparisons to non-RL-based approaches, including rule-based and purely latency-focused systems.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The comparison in Figure 1(c) is to RetinaNet, which is a network from 2017. Do more modern networks (eg: ConvNeXt/Swin, CoAt-Net, MaxViT, etc.) fulfill the real-time/streaming constraint by being able to handle higher resolutions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have been adequately addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review, we shall incorporate these points in the final version of the manuscript.
1. **Increasing Search Space:** We agree that this is an important direction. However, adding even more decision dimensions, such as edge-cloud processing (should an image be sent to a cloud GPU or executed on an edge GPU like a Jetson), would involve developing more complicated simulations (e.g., for network variability). We leave this for future work.
2. **Prior work:** The execution model of prior work is different; it is difficult to compare, as every algorithm would need to be modified for a fair comparison. We did include one such non-RL baseline, AdaScale [2] (with a modified scheduling algorithm), which reduces image scale and solely optimizes latency by proxy; its performance is worse by 35%.
3. **New Models:** We aim to develop execution policies that are agnostic to the choice of model and hardware. Recent work has shown models with better accuracy-latency tradeoffs (e.g., StreamYOLO, YOLO-X); however, the fundamental tradeoff remains. Models like StreamYOLO do satisfy the real-time constraint on high-end GPUs like the V100, but are not hardware-agnostic: due to design limitations, their performance suffers on older and edge hardware [13].
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will keep my score the same. | Summary: The paper looks at the problem of unpredictable compute requirements for real-time perception. It addresses this in term of a multi-objective optimisation problem (quality of results and latency). The authors propose using RL in order to optimise the selection of various characteristics in order to achieve an optimal result.
Strengths: The paper is well written and provides a good introduction for the reader and guides them through the work.
Weaknesses: The proposed approach does not come across as that novel. There is a lot of prior use of RL techniques for multi-objective optimisation which the authors don't appear to have looked at.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The authors spend a lot of the introduction naming and saying how good their approach is. This should be more analytical than what is presented.
- Lines 89-94 give a formal definition of the perception problem. However, most of the introduced terminology does not get used further in the paper. This space could be used for other material.
- Figures 2,3&4 present results, but it is not clear where these results came from and how they were produced.
- "In our experiments we observed that controller collapsed to predict a single configuration when rewards of this form were provided (depending on λ), and did not learn" - which experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Small amount of discussion of limitations in conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestions in the review, they will definitely improve the quality of the manuscript.
1. Novelty: Please see main comment.
2. Thank you for this suggestion. We have added some key numerical highlights in main comment, which we will add in the introduction.
3. Thank you for this suggestion, we shall add a connection to Section 3.4. The setup mentioned in Section 3.1 is necessary for understanding Section 3.4.
4. We have mentioned experimental, implementation, and a few algorithmic details in the supplementary; we shall add more explanation there.
5. Please let us know what exactly is missing in the results section; we shall do our best to incorporate the suggestion.
6. Apologies for the confusion: we trained the controller with the reward defined in [3], and the learnt policy always picked a singleton configuration (like a static policy), depending on the hyperparameter \lambda. The data from previous work [3, 21] is not public, so we could not compare directly on their data; hence, we omitted the comparison from our results.
---
Rebuttal Comment 1.1:
Title: Thanks for the comments
Comment: The authors have addressed a number of the concerns that I had. However, they have not fully addressed the underlying issues. As such I feel my mark should stand. | Summary: This work provides a novel learning-based approximate execution framework to learn runtime decisions for real-time perception. The learned controller proves to be efficient and performant, which appears to be useful for many real-time perception applications in the cloud and edge.
Strengths: 1. The focus of the work on real-time perception system's runtime execution decisions is important and timely.
2. The proposed Chanakya proves to learn performant execution policies and can incorporate new decision dimensions and be ported to different hardwares.
Weaknesses: The experimental results are only reported on detection tasks; results on other perception tasks like segmentation could also be reported.
The decision dimensions seem to be discrete and limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How do you evaluate and compare controllers on different hardware, which can have different features?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As listed in the weaknesses part, more experimental results on other perception tasks could be reported, and I am not sure whether the decision space is valid.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review, we are glad to see that you agree that Chanakya has been proved to learn performant execution policies in a variety of scenarios.
1. **Single Task Experiment:** We performed a thorough study using one task – detection – but across datasets, scenarios, and edge and cloud devices. Critically, the runtime characteristics of instance segmentation models (e.g. Mask R-CNN) are like those of detection models (e.g. Faster R-CNN) [6]. To experiment with instance segmentation, we would need to additionally define a different set of context functions and a more complicated forecasting mechanism, which further complicates experimental design.
2. **Discrete Dimensions:** Most prior work also operates on discrete decision dimension(s), usually a single such dimension.
3. **Evaluation across hardware:** Could you please clarify the question regarding comparing controllers on different hardware? What is to be compared?
---
Rebuttal Comment 1.1:
Comment: Thanks and I will keep the score as is. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments. We are glad to see that reviewers noted that our paper is “well written” (R4, R5, R6) and “well-organized” (R6) as we tackled a “relevant and impactful problem” (R5) with a “clear motivation” (R3). It is heartening to note that our “learned execution framework for real-time perception” is “proved to learn performant execution policies” (R4) through a “detailed experimental setting” (R2), showing “strong results” (R5) by “demonstrating the performance improvements” (R6) while considering “both intrinsic and extrinsic” (R1) environment contexts.
### Contribution
We propose a novel learned framework to learn execution decisions for real-time perception, optimizing accuracy and latency simultaneously in a variety of scenarios. The detector, tracker, etc. are **not retrained**; improvements come **only from the learned execution policy**.
1. We obtain improvements of 17% on Argoverse-HD and 9% on Imagenet-VID for a pre-defined perception system. This improvement is complementary to the components and is better than the best static policies found by benchmarking and domain experts.
2. Our method can improve performance in a variety of hardware and scenarios. E.g., when environment has process contention, our execution policy degrades better than static policies. Our algorithms (like scheduler) are not modified to account for new considerations, unlike prior work.
### Novelty (R2/R4)
While finding good runtime decisions through execution frameworks, and improving the attainable Pareto frontier through better models for real-time perception, have both been studied:
1. Prior work does not learn the execution framework and thus generally operates on **one** consideration or decision dimension, either intrinsic or extrinsic. In reality, real-time perception systems have many considerations. Thus, prior methods need to modify their algorithms, e.g. the scheduler, to incorporate any new consideration (and model the interactions of every consideration with the others, or come up with a new heuristic). We claim novelty of our learned execution framework and show its advantages.
2. Prior work optimizes suboptimal proxy metrics. For example, optimizing scale alone [2] (a proxy for latency itself) drastically reduced performance. We claim novelty of the objective to optimize and of the design decision to situate our execution framework in the streaming perception problem [6], which led to this natural objective.
### RL Novelty/Vanilla RL (R2/R4)
Proposing new RL algorithms is not the focus of this work, and other RL algorithms (like actor-critic methods) may also be applicable. Due to the resource-constrained nature of real-time perception, we formulate an RL setup that does not hamper performance over static policies.
### Offline Upper Bound (R6)
We take the best configuration of the static policy from [6] and simulate a detector with 0ms latency for every frame and obtain an upper bound. We shall mention this in the camera ready version.
Denoting R1 to 6 for Y14P, L1Mt, fYnV, Gsqp, 7j3D, 6nGS respectively. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The author designs Chanakya, a learned execution framework for streaming perception that jointly optimizes accuracy and latency. To achieve so, the framework captures both intrinsic and extrinsic contexts and utilizes a novel reward function to train a learned controller.
Strengths: 1. The motivation is clear and the discussion of related work is comprehensive.
2. The description of experimental setting is detailed and results of ablation studies are provided.
Weaknesses: 1. Novelty is somewhat limited. The methodological innovation in this paper is limited to using both intrinsic and extrinsic contexts to better learn runtime decisions with reinforcement-learning-based methods. The fundamental methodology is mostly shared with prior work.
2. Presentation can be improved. Table 1 is at the top of page 6, but no context related to it appears until page 7. The discussion of asynchronous training and training for edge devices in Section 3.5 seems unnecessary.
3. Baselines are incomplete. I think the experiment should at least have vanilla RL-based baselines and then show the effectiveness of the proposed framework step by step.
4. I'm not sure it's a good idea to incorporate all factors into a learned controller. For a single-GPU program, I believe rule-based decision algorithms can adapt to extrinsic factors (e.g. software and hardware status) very well.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How does your approach compare to hybrid solutions that employ handcrafted rule-based heuristics for extrinsic factors and learning-based decision algorithms for intrinsic factors?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: In the subsection "Limitations and Broader Impact", the authors point out that their framework can be used to deploy unethical applications; I think this is just a potential negative societal impact, not a limitation. Therefore, I encourage the authors to discuss the limitations of their work and potential solutions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review, it has helped us clarify aspects of our work. We believe the raised concerns are addressable in the manuscript.
1. Please see the main comment for novelty concerns.
2. We shall improve the presentation of the work and move the tables and add a high level overview diagram.
3. **Vanilla RL:** Please see main comment.
4. **All Factors:** This is a problem specifically on edge devices, where users have multiple applications running on the device simultaneously. While rule-based decisions for this consideration do appear to work [4], even methods like ApproxDet hypothesize using NNs to infer configurations and RL to learn scheduling, due to their distinct advantages.
5. **Hybrid Methods:**
- We compare with one such baseline for heuristics with a learned metric – AdaScale [2]. The rule is greedily accepting the scale value from the learned scale predictor for the next frame.
- No other entirely learning-based real-time decision algorithms, or "handcrafted rule-based heuristics for extrinsic factors and learning-based decision algorithms for intrinsic" factors, exist as far as we are aware. We present the first such general framework.
6. **Limitations:** (a) Our training and test environments are the same, and we haven't tested the robustness (including adversarial robustness) of the learnt policy. Given the vast literature on RL with domain/environment randomization, we expect this limitation to be resolved in future work. (b) We considered context to be instantaneous w.r.t. a frame; however, for some considerations like power and temperature, accounting for context over a longer time horizon is generally required [Chen2021].
[Chen2021] Enforcing Policy Feasibility Constraints through Differentiable Projection for Energy Optimization, e-Energy, 2021
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors and other reviewers for their replies and discussions. I read them carefully and it helped me understand the work better. I raised the rating to 5 and lowered the confidence to 1. | Summary: The paper presents Chanakya, a learned framework for real-time perception that automatically balances accuracy and latency. Unlike previous fixed rule-based methods, Chanakya considers both intrinsic and extrinsic factors and is trained to make flexible decisions. evaluations show that it outperforms existing execution policies on both server GPUs and edge devices with low overhead.
Strengths: 1. It proposes a learning based approximate execution framework to learn runtime decisions such as the resolutions of input, which model to use, etc.
2. The proposed framework considers multiple context both intrinsic and extrinsic to improve the reliability of the runtime decisions.
Weaknesses: 1. It is unclear how generalizable the trained policy is. More discussion is needed on when to train from scratch versus when to use transfer learning, and on the corresponding cost when the system is deployed in a different environment.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It would be good to have a diagram figure to show the high level overview of the whole framework.
2. There exists some work that uses elastic model settings for different scenarios, such as [1]. More discussion and comparisons are needed.
[1] Wang, Chien-Yao, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. "Scaled-yolov4: Scaling cross stage partial network." Proceedings of the IEEE/cvf conference on computer vision and pattern recognition. 2021.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review!
1. We train the controller from scratch depending on the conditions simulated. We leave robustness as future work, as many insights from existing RL work (such as domain randomization) can be drawn on to improve this aspect. The cost of training the policy is a fraction of training the detection models themselves (a 4-layer MLP for 10 epochs).
2. Thank you for this suggestion! We shall add a high-level overview diagram in the camera ready version of the paper.
3. The suggested paper (Scaled-YOLOv4) is interesting: the structure of the model is searched for, in addition to network depth and width, to obtain a family of models with good accuracy-latency characteristics. Critically, that paper optimizes decisions taken during training a family of detectors, which is not our focus. We focus only on runtime decisions, and we can learn to select from a slew of already trained models (such as the family in the mentioned paper) for a given hardware/environment. We shall mention it in related work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the answers from the authors. I'd like to keep my score. | null | null | null | null |
Fine-Tuning Language Models with Just Forward Passes | Accept (oral) | Summary: Fine-tuning with backpropagation becomes infeasible for very large language models because it uses too much memory. While zeroth-order optimization uses far less memory and could in principle fine-tune the model with just forward passes, past theory suggested that the learning rate must scale down with the number of parameters, making convergence prohibitively slow. However, this paper finds that zeroth-order optimization actually performs quite well and converges quickly even on very large language models. They provide theory to explain this fast convergence, where they show that under an assumption they call "low effective rank," the learning rate scales down with the rank rather than the number of parameters. They also provide a memory-efficient implementation of zeroth-order optimization that they call MeZO, along with memory-efficient zeroth-order versions of SGD with momentum and Adam. In experiments, the method performs similarly to backpropagation with 1/12 the memory usage, while outperforming in-context learning and linear probing.
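The two-point zeroth-order gradient estimate this summary describes can be sketched in a few lines. This is a generic SPSA-style estimator on a toy loss, not the authors' MeZO implementation; all names here are illustrative:

```python
import numpy as np

def zo_grad_estimate(loss_fn, theta, eps=1e-3, rng=None):
    """Two-point zeroth-order (SPSA-style) gradient estimate.

    Perturbs the parameters along a random Gaussian direction z and uses
    two forward passes: g ~ (L(theta + eps*z) - L(theta - eps*z)) / (2*eps) * z.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(theta.shape)
    proj = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps)
    return proj * z

# Toy quadratic loss: the estimate points into the descent half-space,
# since g . grad = (grad . z)^2 >= 0 for a quadratic.
loss = lambda w: 0.5 * np.sum(w ** 2)
theta = np.ones(4)
g = zo_grad_estimate(loss, theta)
theta_new = theta - 0.1 * g  # one ZO-SGD step
```

Only the scalar projection of the gradient onto `z` is estimated, which is why memory usage stays at the level of a forward pass.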
Strengths: (1) The paper is well-written.
(2) The method is simple and easy to understand.
(3) The theory provides useful insights into why zeroth-order optimization works for fine-tuning large pre-trained models.
(4) The experimental results are strong, and the appendix contains thorough ablations.
(5) The idea that zeroth-order optimization works well for fine-tuning LMs seems practically useful and addresses a pressing need in the community for memory-efficient methods.
Weaknesses: The paper seems strong overall, and I support its acceptance regardless of whether the suggested experiments below are run or not during the rebuttal period.
(1) From what I understand, the paper does not verify the low effective rank assumption empirically, nor is it verified in the papers cited (which either study the effective rank / Hessian spectra in non-LLMs, or study the LLMs but not the effective rank and instead study the intrinsic dimensionality of fine-tuning). Therefore, to justify the assumption, it seems useful to study the Hessian spectra of the downstream fine-tuning loss in LLMs, at whatever size is feasible.
(2) Related to (1), to verify the theory and confirm that the effective rank is indeed the quantity that determines convergence rates, it seems useful to run simulated experiments where one constructs a synthetic model + data and varies the effective rank, and examines whether the convergence rate or gradient norm scales with the effective rank as predicted in the theory.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (1) While MeZO is much more memory efficient, is it slower than backprop in terms of wall-clock time? (The appendix does state that FT used 1K steps while MeZO used 100K steps in the experiments. How much faster is each step of MeZO compared to each step of backprop?)
(2) It's a bit surprising to me that MeZO, MeZO-prefix, and MeZO-LoRA optimize at similar speeds (Figure 5 in the appendix). Do the backprop versions also optimize at similar speeds? And do the three parameterizations actually have similar effective ranks empirically?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Can we study the effective rank assumption in some language models?**
It is difficult to translate results on very small models, on which we would be able to measure the effective rank, to the large ones that we would find MeZO useful for. We would also likely need to pre-train these very small models ourselves, which is expensive, and they may be so small that they do not achieve meaningful results on the benchmarks we study.
**Can we use simulations to verify the theoretical convergence analysis?**
Thanks for the suggestion! We added the simulated experiment in our attached PDF and also reported the results in our general response. In short, we observed that the convergence rate of MeZO does depend on the effective rank in the simulated experiments.
**What is the wall-clock efficiency of MeZO compared to fine-tuning with backpropagation?**
Please see our general response. In short, MeZO reduces the number of GPU-hours needed to train large models, leading to a 2x GPU-hour reduction on a 30B model compared to Adam fine-tuning.
**Do the backpropagation versions of full fine-tuning, prefix tuning, and LoRA optimize at roughly the same rates, as was observed with MeZO? And do the three parameterizations actually have similar effective ranks empirically?**
We did not measure the empirical effective rank of different methods due to limited compute. We observe that with backpropagation, all three methods converge roughly at a similar speed, with LoRA and prefix-tuning slightly slower on some tasks. The interesting case with MeZO is that classical ZO analyses suggest that full-parameter MeZO would converge much more slowly, but it is not the case empirically. Our theory in Section 4 highlights why the convergence rate does not depend on the number of parameters.
---
Rebuttal Comment 1.1:
Comment: Thanks for the great answers! | Summary: This paper proposes a new zeroth order optimizer, MeZO, for LM training. This is proposed as an improvement to ZO-SGD. The advantage of this approach is a 12x reduction in the amount of memory required for training compared to backpropagation. This enables the training of much larger models.
The effectiveness of MeZO is shown across a range of benchmarks and model sizes. The results compare favorably to linear probing and in context learning.
Strengths: The MeZO technique stands to unlock substantial capability for LM training by enabling the training of much larger networks. The compatibility with LoRA and prefix tuning covers important use cases for many LM users. The ability to optimize non-differentiable objectives is compelling and could be expanded in the future.
The empirical behavior is coupled with a section on theory which effectively describes the expected behavior and elaborates on the expected slow convergence by expanding the theoretical analysis to address networks with low effective rank.
Weaknesses: While the analysis refers to the convergence rate of MeZO, there is a very brief treatment of convergence behavior in the paper (Appendix E.2). It might be helpful for this to be expanded and possibly compared to backpropagation, especially in the context of the presentation of Section 4 (Theory).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Appendix A (and Section 3) demonstrate that prompting is crucial to MeZO performance. Why is this? Much of the other behavior is supported by a theoretical treatment, but this observation stands out as relatively uninterrogated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, though fairly lightly. The observation about prompts being critical for training may be a limitation for some (new) tasks or datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Can the theoretical convergence analysis of MeZO be compared to backpropagation?**
Corollary 1 directly compares the SGD convergence rate to the convergence rate of MeZO, since the term in brackets in equation 5 is the per-step loss decrease of SGD (see Lemma 1). Two factors make MeZO converge more slowly than standard backpropagation (lines 211-218): (1) MeZO has to be run with a smaller learning rate than SGD in order to reliably decrease the loss at each step, and (2) MeZO reduces the amount that the loss can decrease at each step.
**Why is prompting crucial to MeZO?**
Please refer to our general response. In short, we hypothesize that using a prompt makes the fine-tuning objective similar to the pre-training one, which likely exhibits a Hessian with low effective rank.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses | Summary: The paper present a new optimiser MeZo based on stochastic approximation using gradient perturbation.
This optimiser is very memory efficient as it only requires to perform 2 forward passes with different deltas/epsilons on the parameters and multiple gaussian samplings. These algorithms allows "finetunning" large language models to specific tasks very efficiently ( up to 30B on a single A100) yielding between x4 to x12 memory reductions. Since the proposed algorithm is an optimizer it can be combined with other standard techniques such as LORA or prefix tunning. All this is applied in finetunning setups similar to ICL.
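One way the "two forward passes with different deltas on the parameters" can be done without storing any extra state is to regenerate the Gaussian direction from a fixed seed and perturb the weights in place. The sketch below is an illustrative reconstruction of that idea, not the authors' code:

```python
import numpy as np

def perturb_inplace(params, scale, seed):
    """Regenerate the Gaussian direction z from the seed and add scale * z
    to each parameter tensor in place, so z is never materialized in memory."""
    rng = np.random.default_rng(seed)
    for p in params:
        p += scale * rng.standard_normal(p.shape)

def zo_step(params, loss_fn, lr=1e-3, eps=1e-3, seed=0):
    """One zeroth-order SGD step using two forward passes and a shared seed."""
    perturb_inplace(params, eps, seed)              # theta + eps*z
    loss_plus = loss_fn(params)
    perturb_inplace(params, -2 * eps, seed)         # theta - eps*z
    loss_minus = loss_fn(params)
    perturb_inplace(params, eps, seed)              # restore theta
    proj_grad = (loss_plus - loss_minus) / (2 * eps)
    perturb_inplace(params, -lr * proj_grad, seed)  # theta - lr*proj_grad*z
    return proj_grad

# Demo on a toy quadratic "model": the loss decreases after one step.
params = [np.ones(8)]
sq_loss = lambda ps: 0.5 * sum(float(p @ p) for p in ps)
before = sq_loss(params)
zo_step(params, sq_loss, lr=1e-3, eps=1e-3, seed=42)
after = sq_loss(params)
```

Because the same seed reproduces the same `z`, the perturbation can be applied, undone, and reused for the update without ever allocating a second copy of the parameters.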
Strengths: The algorithm is a new application of well-known stochastic gradient approximation techniques, which are often forgotten due to their slowness.
It is very surprising that this algorithm works, and the authors provide a theoretical justification of why this could be working in this case.
They also acknowledge that, despite how it might sound, this approach only works in the prompting fine-tuning scenario [Appendix B.2].
The results are insightful, the baselines are fair, and the theoretical analysis is correct to the best of my knowledge.
This can settle as an alternative to in-context learning with prompting, by fine-tuning with a limited set of examples (512).
The proposed technique seems to work on the benchmarks used, surpasses other techniques such as zero-shot, LP, and ICL, and approaches the performance of fine-tuning very closely. This can be a cheap alternative to ICL for some tasks.
Weaknesses: While there was a huge and titanic effort in the paper, many questions stem from the technique.
The first area which has not been explored much, and which is taken as given, is the need for a prompt to apply MeZO.
This key ingredient is not well studied but rather supported by some preliminary experiments, e.g. Table 5.
Why is MeZO not working without prompts, even some very simple ones?
The need for the prompt also raises the question of how this is related to ICL, as some works suggest ICL may be doing something akin to fine-tuning or backpropagation through the attention weights. Is it this combination of prompting and MeZO that is guiding the forward propagations? Is there a mixed cooperation between the prompt and the stochastic technique? How much of the prompt is needed?
Another question is how the different techniques behave as a function of the number of examples k. It would have been nice to see a plot, for some models at least, comparing ICL, MeZO, and possibly fine-tuning on the selected tasks. Why have the authors stopped at 512? Why only 16 or 512? Some datasets contain more training examples. Why didn't they compare fine-tuning and MeZO in other setups with larger numbers of examples?
There is also the relationship between the task itself and the optimizer. It is not clear to me on which tasks this will work properly. Given the prompting discussion above, I suspect this might only work on low-perplexity tasks, or tasks for which prompting or ICL can generate good results, and not on other more complex tasks.
Clearly this may be too much to address in the paper, but all the aspects above point toward the little understanding the reader is left with of the conditions under which this technique can be applied. The future work seems to already assume the MeZO algorithm is working and proven, but there is just a hypothesis and a very loose link between the experimental conditions and the theory. The link is established as "We attribute these phenomena to the Hessian of the loss exhibiting small local effective rank." It would have been nice to strengthen this connection with some experiments or computation. Could this explain when or how this algorithm can be applied? If we remove the prompt, does this increase the effective rank of the Hessian? Would other tasks exhibit larger ranks? How can we reduce it for each of the tasks?
Please, I would kindly ask the authors to read above questions and discussion as a signal of the interest the paper brought to the research field and not as criticism on their very interesting work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: While I have many questions, none of them affect the paper directly.
It would be nice, however, if some of the weaknesses could be discussed by the authors:
* On which tasks will this technique work? Do you have a characterization? Have you tried other tasks where it failed?
* Why the Hessian hypothesis? Is this based on some experimental or preliminary analysis?
* When does fine-tuning surpass this technique? When we have to fine-tune on hundreds of thousands of examples?
* Why does it only work with prompting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: While the authors do not focus on the limitations, they are more or less clear from the previous analysis. The focus of the paper is more in the direction of stressing the surprise that the technique, with all its known drawbacks, is actually working.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Why does MeZO require using a prompt? What tasks can MeZO work on? Why do you need the Hessian hypothesis (Assumption 1)?**
Please refer to our general response. In short, we hypothesize that using a prompt makes the fine-tuning objective similar to the pre-training one, which likely has a Hessian with a low effective rank. Following this, we agree with you that a task that has a lower perplexity (i.e., a more “natural” prompt) will probably work better with MeZO. Once we added a simple prompt, we did not encounter any tasks that MeZO completely failed to train on.
**How does the empirical success of MeZO interact with the theoretical hypothesis that transformers may simulate fine-tuning on a smaller, internal model during inference time?**
MeZO is a useful tool for fine-tuning currently popular LLMs trained with standard pre-training practices. It is not guaranteed to work for models that are designed or trained in new ways, such as the scenario you mention. It may be the case that the internal model is not stable to fine-tuning the large model. Alternatively, it could be stable to fine-tuning in some way (e.g., analogous to noise-tolerant circuits) that allows it to not be destroyed during MeZO. Overall, we are not sure if currently existing pre-trained models are simulating and updating internal models, so we cannot be sure how such constructed models would behave during fine-tuning, whether it is done with backpropagation or MeZO.
**How does MeZO behave with different numbers of examples?**
Thanks for your question. We will include experiments ablating against different dataset sizes in a subsequent revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for kindly answering my questions. | Summary: The paper proposes an enhanced memory-efficient zeroth-order optimization method named MeZO. MeZO only requires the same memory as inference time and thus can enable model tuning for large LMs with a limited memory budget. The authors demonstrate the efficacy of MeZO on multiple NLP benchmarks compared with linear probing, in-context learning, and fine-tuning in the few/low-shot learning regime. The authors also provide a detailed theoretical proof for MeZO.
Strengths: 0 - The authors targeted a not-well-understood domain (zeroth-order optimization) and opened up new opportunities for future work. In the era of LLMs, this method can enable much future work, especially for those who don't have access to large-scale GPU clusters.
1 - Comprehensive experiments and ablation studies on the proposed method.
2 - Strong theoretical support on the proposed method.
3 - Good writing and flow which makes the paper easy to follow and understand.
4 - Well-articulated future work.
Weaknesses: No major weaknesses. I left some comments in the questions section and hope the authors can answer and address.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Can authors also report practical training time compared with standard fine-tuning (e.g., in terms of # steps/ second)?
- Maybe report average numbers as well in experiment results (e.g., Table 1)
- Interested to see how the performance of MeZO compares with fine-tuning in the scenario of full training data (rather than k = 100, 500, etc).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have discussed limitations in the conclusion section and aim to explore them in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestion, and we will report average numbers in the experiment results in the next revision.
**What is the practical training time compared to standard fine-tuning?**
Please refer to our general response for a wall clock time analysis. In short, MeZO reduces the number of GPU-hours needed to train large models, leading to a 2x GPU-hour reduction on a 30B model compared to Adam fine-tuning.
**How does MeZO perform with full training data?**
We choose the fixed number of training example setting because of compute limitations. Some datasets have millions of examples, so fine-tuning on the entire dataset can be very expensive for the model scale we are studying. We will include more ablations showing how MeZO performance changes with the dataset size in the next revision.
---
Rebuttal Comment 1.1:
Comment: Thank you. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback. We address some shared questions here.
**When can MeZO succeed in fine-tuning? What losses satisfy Assumption 1 (i.e., the Hessian has a low effective rank)? Why is a prompt necessary for MeZO to be able to fine-tune the model? Can you verify the dependence of MeZO convergence rate on the effective rank?**
Our theory in Section 4 provides a sufficient (but not necessary) condition for MeZO to succeed: the Hessian should exhibit a small local effective rank during fine-tuning (Assumption 1). We hypothesize that the Hessian of the pre-training objective likely exhibits low effective rank, because the model has been trained for many steps during pre-training. Ample evidence suggests that training for many steps can make the Hessian have a low effective rank in the case of vision (see lines 222-228). Adding a prompt turns the downstream task into next-word prediction [1] (or masked language modeling [2]). So, the Hessian of the fine-tuning objective when using a prompt (similar to pre-training) likely exhibits a small effective rank like the pre-training one [1, 3]. There is additional empirical evidence that the Hessian of a language model during fine-tuning likely has low rank [4].
Additionally, per reviewer XvSx’s suggestion, we ran simulations in a simple setting to verify the dependence of MeZO convergence rate on the effective rank and reported the results in the attached PDF. We observed that the slowdown of the convergence scales with the effective rank. We will include these experiments in the next revision of the paper.
[1] Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora. A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks. ICLR 2021.
[2] Tianyu Gao, Adam Fisch, Danqi Chen. Making Pre-trained Language Models Better Few-shot Learners. ACL 2021.
[3] Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora. A Kernel-Based View of Language Model Fine-Tuning. ICML 2023.
[4] Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. ACL 2021.
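A toy version of such a simulation can probe the dependence of zeroth-order convergence on the effective rank. The sketch below is our own illustrative reconstruction under stated assumptions (quadratic loss, unit eigenvalues), not the experiment in the authors' attached PDF:

```python
import numpy as np

def zo_sgd_quadratic(d=50, rank=5, steps=400, lr=0.02, eps=1e-4, seed=0):
    """Run two-point zeroth-order SGD on f(x) = 0.5 x^T H x, where H is a
    rank-`rank` projection (eigenvalues 0 or 1); return the loss trajectory."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.standard_normal((d, rank)))[0]  # d x rank orthonormal
    H = U @ U.T
    f = lambda x: 0.5 * x @ H @ x
    x = rng.standard_normal(d)
    losses = [f(x)]
    for _ in range(steps):
        z = rng.standard_normal(d)
        g = (f(x + eps * z) - f(x - eps * z)) / (2 * eps) * z
        x = x - lr * g
        losses.append(f(x))
    return losses

low = zo_sgd_quadratic(rank=2)
high = zo_sgd_quadratic(rank=40)
# The per-step slowdown should scale with the Hessian rank rather than
# the ambient dimension d, consistent with the theory described above.
```

Both runs live in the same ambient dimension, so any convergence gap between them comes from the rank of `H` alone.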
**What is the wall-clock efficiency of MeZO compared to standard training with backpropagation?**
The attached PDF with this rebuttal shows the wall-clock efficiency of MeZO compared to fine-tuning. MeZO takes more steps to achieve similar performance than fine-tuning, but requires much less wall-clock time per step and requires fewer GPUs. The gains are more prominent for larger models, which are the ones that require more memory to fine-tune. For example, for a 30B model, we show that MeZO enjoys a 7.74x per-step speed up and a 2x total GPU-hour reduction compared to fine-tuning with Adam. We will include these results in the next revision of the paper.
Pdf: /pdf/68abacc31f632a9fde8c1b86a1ddbaeb848df590.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work introduced a memory-efficient zeroth-order optimizer that can fine-tune large language models with the same memory footprint as inference, using only forward passes and gradient estimates. Comprehensive experiments across model types, scales, and tasks show that MeZO outperforms zero-shot, in-context learning, and linear probing, and achieves comparable performance to fine-tuning with backpropagation, while reducing memory cost by up to 12 times. MeZO can also optimize non-differentiable objectives, such as accuracy or F1 score, which are usually not amenable to backpropagation. Finally, the work provides theoretical insights that explain why MeZO can optimize LMs with billions of parameters, despite classical zeroth-order analyses suggesting otherwise.
Strengths: 1. It proposes a novel and memory-efficient method to fine-tune large language models without backpropagation, which can save up to 12x memory compared to standard methods.
2. It demonstrates that the proposed method can achieve comparable or superior performance to fine-tuning with backpropagation across various tasks, models, and tuning techniques.
3. It shows that the proposed method can optimize non-differentiable objectives, such as accuracy or F1 score, which are useful for many applications.
4. It provides theoretical insights on why the proposed method can overcome the classical limitations of zeroth-order optimization and leverage the benefits of pre-training and task prompts.
Weaknesses: 1. While the experiments demonstrate good performance on the language understanding tasks, it is unknown whether the method is also applicable to the generation tasks.
2. It relies on the assumption of low effective rank of the Hessian matrix, which may not hold for all loss functions. It would be great to have a discussion about the scope of application for the proposed method.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: How is the training stability of MeZO? Does it have an advantage over the backpropagation-based methods, especially for large models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. It would be better to have the experiment results on the generation tasks, e.g. translation and summarization.
2. I suggest the authors to have a discussion about the scope of application for the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Does MeZO work for generation tasks?**
Table 1 and Figure 1 show the performance of MeZO on DROP and SQuAD, which are two question-answering tasks that are formatted as generation tasks in our experiments. For each task, given the question, we train the model to directly generate the answer text (please see our Table 12 for details). We leave the study of more generation tasks like summarization and translation to future work.
**When does the assumption of the low effective rank of the Hessian hold?**
Please refer to our general response. We hypothesize that when using a good prompt, the Hessian of the downstream objective likely exhibits a low effective rank.
**How stable is MeZO training? Does it have an advantage over backpropagation?**
MeZO is not very sensitive to hyperparameter choices. As shown in Tables 13 (RoBERTa-large) and 14 (OPT), we restrict the grid searches to a very narrow range of hyperparameters and often test MeZO with fewer configurations than we use to test fine-tuning with backpropagation. Also, as a gradient estimate, MeZO avoids well-known issues with backpropagation such as vanishing and exploding gradients, though these rarely occur when training networks with residual connections. MeZO is unstable in other ways, like if $\epsilon$ must be set very small. However, in practice, we find that MeZO succeeds with a relatively large $\epsilon$ and reduces the loss consistently over the course of training.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my question. | null | null | null | null | null | null |
Bounded rationality in structured density estimation | Accept (poster) | Summary: The author built a new model to explain the human mental model of density estimation given sequentially observed data points. The model consists of a “Rational component” that generalizes the Chinese restaurant process, and an “Aleatoric component” that adds an error term. The model is fitted on real experimental data and shows superior performance compared to baselines.
Strengths: The real human experimental data application looks interesting and novel.
Weaknesses: There are some parts of the paper that are unclear to me. See the questions section below.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Experiment:
I wonder how much of the over-estimation of cluster sizes is due to the small number of samples generated? Presumably, if the ground-truth number of clusters is huge, humans will instead under-estimate the number of clusters?
Rational component:
It doesn’t seem too clear to me why the process is “economical”. Could you clarify this? My best understanding of the author’s argument is that when the process is exchangeable like a CRP, it involves memorizing all the points, which is cognitively implausible. But I think as long as we are doing Bayesian inference and getting a posterior, it implies memorizing, even if things aren’t exchangeable. Another interpretation is that the author is suggesting that participants are not doing a fully Bayesian posterior estimation, but some step-wise procedure.
I wonder if it would be useful to give some discussion comparing the economical ICP with, say, the Pitman-Yor process? (e.g. some more literature review of extensions of the CRP?)
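For reference, the standard CRP predictive rule the reviewer alludes to, together with its Pitman-Yor generalization via a discount parameter, can be written down directly. This is a textbook sketch, not the paper's economical variant:

```python
import numpy as np

def crp_predictive(counts, alpha, d=0.0):
    """Probability of the next point joining each existing cluster or a new one.

    counts[k] = number of points already in cluster k; alpha > 0 is the
    concentration; d in [0, 1) is the Pitman-Yor discount (d=0 recovers
    the plain CRP). Returns a length-(K+1) probability vector, where the
    last entry is the probability of opening a new cluster.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    K = len(counts)
    existing = (counts - d) / (n + alpha)   # join existing cluster k
    new = (alpha + d * K) / (n + alpha)     # open a new cluster
    return np.append(existing, new)

p = crp_predictive([3, 1], alpha=1.0)  # CRP: [3/5, 1/5, 1/5]
```

With d > 0, the Pitman-Yor discount shifts mass from large existing clusters toward new ones, producing heavier-tailed (power-law) cluster size distributions than the CRP.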
Aleatoric component:
In general, I am a bit confused by this part. Is it just a technical point to make MLE feasible, or is it trying to model humans making mistakes?
Line 194: It isn’t clear to me why “the dimensionality of the report is variable” is a challenge that needed to be solved with an aleatoric component. Could you clarify this? I thought nonparametric Bayes prior like CRP already adapts to variable dimensions well. Are you saying that the aleatoric component is definitely needed as otherwise the model cannot be fit at all? Or just that it makes things more numerically stable or realistic?
Formula (5) seems a bit too simplified; would it be computationally trickier to use something that assigns probability based on how close $\hat{K}$ is to $K_i$? I understand that the authors suggest we can pick whatever slack function we like, but this particular choice seems too naive.
Figure 5(a): It's unclear to me the major differences between economic vs exchangeable methods?
General:
I wonder if there should be something that models human forgetting previous examples and upweighting examples near the end. As far as I can tell that's not in the current framework.
Typo:
Line 148: n_t,k number of “clusters” assigned to cluster k. clusters should be samples?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discussed the limitations of their work in the last paragraph. I wonder if they can provide more discussion of possible future work to address these limitations?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her insightful feedback and questions.
Reply to ***How much of the overestimation of cluster sizes is due to the small size of samples?***
Unlikely. If the overestimation of # clusters were due to a small sample size, then we would've observed a lower # clusters in the 70-sample vs 10-sample conditions. This is not the case as we have shown in Figure 1D: the reported # clusters is higher in the 70-sample condition.
Reply to ***If the ground truth number of clusters is huge, the human being will instead underestimate the number of clusters?***
Figure 1D suggests so: humans underestimate the number of clusters when the true number of clusters is 4. Our model also predicts so, because new clusters are less likely to form when $K_t$ is already large.
Reply to ***Why is the model economical?***
We thank the R for providing their interpretation of our model. The model is named economical because it is more conservative in introducing new clusters than the conventional CRP. This makes the cluster assignment prior order-dependent, as illustrated in Appendix B.1.1. We assume that this constraint is mainly due to limited memory capacity.
As the R pointed out, we do not perform full Bayesian posterior estimation. However, this feature is shared between the economical ICP and the exchangeable ICP, so it is not the main reason for calling our proposed model economical.
Reply to ***Comparing economical ICP with other extensions of CRP***
The suggested Pitman-Yor process (PYP) has a discounting parameter to control how fast new clusters are added. At each time $t$,
$$
P(z_{t+1}=k \mid \mathbf{z}_{1:t}) := \frac{\alpha + K_t b}{\alpha+t}, \quad k = K_t + 1
$$
or
$$
P(z_{t+1}=k \mid \mathbf{z}_{1:t}) := \frac{n_k - b}{\alpha+t}, \quad k \le K_t
$$
To slow down cluster creation, we must make $b<0$. As $K_t$ grows, the probability of expansion becomes negative, posing a more challenging constrained optimization problem. The economical ICP modulates only the expansion rate through a monotonically decreasing function of $K_t$, so it is more flexible, interpretable, and easier to optimize.
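For concreteness, the contrast can be sketched numerically as follows. This is a minimal illustration, not our actual implementation; in particular, the decreasing function `np.exp(-rate * K_t)` is an assumed example of a monotonically decreasing modulation, not the exact function we use.

```python
import numpy as np

def pyp_new_cluster_prob(alpha, b, K_t, t):
    """Pitman-Yor probability of opening a new cluster at time t+1."""
    return (alpha + K_t * b) / (alpha + t)

def economical_new_cluster_prob(alpha, K_t, t, rate=1.0):
    """Economical-style expansion probability: the numerator decays through a
    monotonically decreasing function of K_t (here exp(-rate * K_t), an
    illustrative choice), so it stays nonnegative for any K_t."""
    return alpha * np.exp(-rate * K_t) / (alpha + t)

# With b < 0 (needed to slow cluster creation), the PYP expansion
# probability turns negative once K_t > -alpha / b, whereas the
# economical variant remains a valid probability.
assert pyp_new_cluster_prob(alpha=1.0, b=-0.5, K_t=5, t=10) < 0
assert 0 < economical_new_cluster_prob(alpha=1.0, K_t=5, t=10) < 1
```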
We are also aware of other priors, such as the sticky CRP [*] used for memory updating [**]. However, we did not find this parametrization to improve the model fit to data in our earlier experiments. We will discuss the PYP, the sticky CRP, and other CRP-like variants in a revision.
[*] Fox E B, Sudderth E B, Jordan M I, et al. A sticky HDP-HMM with application to speaker diarization. The Annals of Applied Statistics, 2011
[**] Gershman S J, Radulescu A, Norman K A, et al. Statistical computations underlying the dynamics of memory updating. PLoS computational biology, 2014
Reply to ***Purpose of the aleatoric component***
The aleatoric component can be regarded as modeling human mistakes. Note that cognitive and behavioral noise is observed in many cognitive science studies, so including the noise component is a standard procedure in modeling. See the ***Intuition and motivation for the aleatoric component (8bgA, ziH7)*** section in our general response for a detailed reply. We are happy to provide further clarification if the R could elaborate on their confusion.
Reply to ***Variable dimensionality***
We clarify that the challenge is not how to produce variable-dimensional predictions, but rather how to measure the likelihood of the model when it produces such predictions. For each trial, different simulations of the DEF produce different numbers of clusters, but the reported cluster parameters have fixed dimensionality for each trial. To our knowledge, modeling variable-dimensional data is not well explored in cognitive science, so we regard this as a challenge for modeling.
Reply to ***Is the aleatoric component definitely needed?***
The aleatoric component is definitely needed, because the rational component alone could not capture the distribution of the reported data. Please see our general response for a detailed reply.
Reply to ***Equation (5) may be too simple***
One could also choose other conditional distributions over $K$ for (5); however, we want to encourage the expected number of correctly predicted $K$ to be high, rather than, e.g., the expected $\ell_1$ or $\ell_2$ error to be small, as would result from a Laplace- or Gaussian-like slack distribution.
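As a concrete (hypothetical) instance of such a slack distribution — a point mass on the inferred $K$ plus a uniform $\epsilon$ leak, not necessarily our exact parametrization — one could write:

```python
def k_slack_pmf(k_hat, k_inferred, k_max, eps=0.05):
    """Sketch of a discrete K-slack distribution in the spirit of (5):
    probability 1 - eps on the inferred K, with eps spread uniformly over
    the other sizes 1..k_max, so every model size in range gets nonzero
    probability. Assumes k_max >= 2 and 1 <= k_inferred <= k_max."""
    if not 1 <= k_hat <= k_max:
        return 0.0
    if k_hat == k_inferred:
        return 1.0 - eps
    return eps / (k_max - 1)

# The pmf sums to one, and maximizing likelihood under it rewards
# predicting exactly the reported K rather than being merely close.
total = sum(k_slack_pmf(k, k_inferred=3, k_max=8) for k in range(1, 9))
assert abs(total - 1.0) < 1e-12
```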
***Difference between economical and exchangeable in Figure 5(a)***
This panel only provides example predictions from the two models. The major difference is in the relative weights of the clusters. The economical model has a distortion on the reported weights, and the predicted weight values are pulled to be closer to each other (more homogeneous).
***Modeling human forgetting previous examples and upweighting examples near the end***
These effects are known as recency effects in memory literature. We tested both the recency and primacy effects (up-weighting very first examples) by adding a weighting function (decaying for the primacy effect, increasing for the recency effect) to the cluster assignments. However, we did not find this to improve the model fit. We also tried to directly look for traits of primacy and recency effects in the data but did not find any significant results.
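A weighting function of the kind we tested can be sketched as follows. The exponential form and the rate parameter are illustrative assumptions; our actual parametrization may differ.

```python
import numpy as np

def position_weights(n, kind="recency", rate=0.1):
    """Illustrative weights over sample positions 1..n: increasing weights
    up-weight late samples (recency effect); decaying weights up-weight
    early samples (primacy effect). Normalized to sum to one."""
    positions = np.arange(1, n + 1)
    w = np.exp(rate * positions) if kind == "recency" else np.exp(-rate * positions)
    return w / w.sum()
```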
***More discussions on limitations***
We will incorporate more limitations mentioned by the reviewers in a revision. In particular, we will address limitations in our experimental design, such as not asking for intermediate reports from participants. Finding a suitable way to probe for this without inducing additional cognitive effects, such as confirmation bias and forced certainty, is an important future direction. In terms of modelling, we will point out other more advanced versions of CRP, such as the repulsive CRP [\*] that may help capture the non-overlapping clusters reported by participants.
[\*] Quinlan J J, Quintana F A, Page G L. On a class of repulsive mixture models[J]. Test, 2021
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed response! They have addressed my questions, so I'm willing to increase my score to 5.
Regarding the comparison of the economical ICP with other extensions of the CRP: here is another CRP-like variant: Zhou, D., Gao, Y. and Paninski, L. Disentangled sticky hierarchical dirichlet process hidden markov model. ECML PKDD 2020. | Summary: The authors describe a density estimation task with humans wherein participants were asked to identify parameters of an unknown distribution from presented data. While participants did a better job of estimating the overall density with more samples, the authors observed a large error in the reported number of clusters in the underlying distributions (and that increased with sample size but typically fell in the range 2-3). The participants also tended to estimate the clusters as non-overlapping. They modeled this process with a non-parametric mixture model. This model consists of a “rational” component and an “aleatoric” component. The rational component uses for cluster assignments an extension of the Dirichlet process that reduces the relative probability of adding new clusters as new points are added and modulates the relative probability of assignment to an existing cluster with a form of divisive normalization. Unlike the regular Dirichlet process, this process is not in general exchangeable, a property the authors argue is cognitively implausible and unnecessary. The aleatoric component adds “structured noise” in order to compute an approximation to the marginal likelihood. They compare their model against both an exchangeable version and a baseline “batch” learning prior.
Strengths: This is an interesting paper about the important problem of how people form internal models from observations. The authors use an interesting approach with a non-parametric prior and provide a fitting routine that uses state-of-the-art tools for efficiency. The experiments are clearly described and the model is (mostly) well presented. The evidence that humans do not form models based on exchangeability is an interesting result.
Weaknesses: Overall, the presentation is excellent, but the description and motivation of the aleatoric component is lacking, to me. Adding some intuition about why you made the choices you made for the model would be helpful. The “slack” probability seems particularly arbitrary to me, as does the noise added in equation 6. Why are these necessary? You mention in the appendix that “the DEF must place non-zero likelihood to mixture models with all possible numbers of clusters” which makes sense because many potential clusterings could result in the same reported number of clusters, but then you essentially remove from the estimation (equation 7) any sample in which the number of clusters does not match what is reported (line 208). The “slack” probability seems to carry all of the weight of this possibility. It is likely I’ve misunderstood something, but I think this section could be clearer.
The results from Figure 1 raise some interesting questions about the need for this particular model. I can’t tell since they are bar plots, but it makes me wonder how consistent the cluster number estimate is and what its dependence is on the number of data points. Moreover, can the data support a simpler psychological heuristic that assigns a number of clusters based on the number of observed samples, particularly given that subjects tend to estimate non-overlapping clusters? It is not clear to me whether the baseline model accounts for this possibility. I’m imagining a heuristic where people divide the screen into chunks and assign clusters that way, making finer divisions with more and more data points. Is it clear that this is not what is happening? Perhaps this is an ignorant question, but it seems to me that the estimation should be driven by some important visual priors.
Minor:
Line 129-130: detailed mechanics are [not?] directly obvious…
Line 150: For r > 0, new clusters are… (not r< 0).
Line 206: There [are] \hat{K}^i clusters…
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the spread of data in Figure 1, particularly panels B and D? Are participants consistent in the number of clusters assigned? Is there a potentially simple relationship between the number of clusters and the sample size? Are there simpler null models to compare against that make relatively ad hoc cluster assignments of this nature? Maybe the baseline model behaves this way, but it is not clear to me that that is the case.
K_max from the aleatoric component is across participants, which means the posterior takes into account observations from other participants. Is this a practical concern? Can you replace K_max with some large but not unwieldy number and get similar results (I assume so)?
This is beyond the scope, I think, but is there a way to get running estimates of the distribution over the course of the experiment? Your experiments don’t directly have this information, but can you get a clue to how people update models of distributions by comparing the experiments with different sample sizes? This seems a natural question given the result that non-exchangeability seems to be important in modeling the human behavior.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do not discuss societal impact nor directly discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for helpful comments and clarifying questions.
There are many important choices in the aleatoric component, so we provide an elaborated explanation here. We will revise the manuscript and the supplemental material to add these descriptions.
***Reply to "Intuition and motivation for the aleatoric component."***
It is a cognitively plausible component that helps us fit the model. We regard the rational component as modeling the density estimation process while stimuli are presented, and the aleatoric component as modeling mostly the behavioral noise while participants report the cluster parameter. Please see ***Intuition and motivation for the aleatoric component*** section in our general response for a detailed reply.
***Reply to "Why is the chosen slack distribution (Eqn 5) necessary?"***
The noise model on $K$ must assign nonzero probability to all possible model sizes $K$ within a reasonable range, while encouraging the $K^i$ inferred from the ICP, which depends on the stimuli, to be sampled most of the time. This motivates the discrete $K$ slack distribution in (5). Crucially, the simulations with incorrect $K$ are not simply removed but instead contribute zero conditional likelihood to the summation in (7). Since $M$ is fixed regardless of how many simulations yield the correct $K$, the ICP must produce as many simulations with the correct $K$ as possible to achieve a high likelihood. Note that the value of $\epsilon$ does not enter the likelihood computation explicitly, but only through sampling of the slack distribution. One could also choose other conditional distributions over $K$ for (5); however, we want to encourage the expected number of correctly predicted $K$ to be high, rather than, e.g., the expected $\ell_1$ or $\ell_2$ error to be small, as would result from a Laplace- or Gaussian-like slack distribution.
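A schematic Monte Carlo estimator matching this description (the names are hypothetical; `simulate_once` stands in for one DEF simulation returning a slacked $K$ and a conditional likelihood function, and is not part of our released code):

```python
import numpy as np

def mc_likelihood(reported_params, reported_K, simulate_once, M=1000, seed=0):
    """Monte Carlo estimate of the report likelihood, as described above:
    simulations whose K mismatches the report contribute zero to the sum,
    while M stays fixed, so priors that produce the correct K more often
    achieve a higher likelihood."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(M):
        K_sim, cond_lik = simulate_once(rng)
        if K_sim == reported_K:   # wrong-K simulations add exactly zero
            total += cond_lik(reported_params)
    return total / M
```

For example, a degenerate simulator that always produces $K=2$ with conditional likelihood 0.5 yields an estimate of 0.5 for a report with $K=2$ and 0 for a report with $K=3$.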
***Reply to "Why is the chosen cluster parameter distribution (6) necessary?"***
The noise model for cluster parameters must preserve the strong dependencies
1. between the cluster parameters and the reported K,
2. between the cluster parameters themselves.
The latter comes from the permutation invariance of the clusters in defining the density function. These considerations motivated the seemingly complicated noise model and cluster adjustment functions in (6). Without the cluster adjustment function, the cluster parameters become decoupled from the slacked $\hat{K}^i$. The summation over the permutations ensures that the likelihood is insensitive to the order in which the distribution parameters appear in the ordered vector representation.
***Reply to "Other heuristics where the number of clusters depends on the sample size"***
It is possible that humans build a different prior for each sample size. However, in experimental settings where the sample size varies across trials and is thus unknown before the sequential presentation of samples (e.g., in Experiments 1 & 3), this heuristic would be incompatible with online learning. That is, agents using such heuristics would need to memorize all the samples before inference, which is less cognitively plausible. For Experiment 1, we implemented a sample-size-dependent prior (Poisson given each sample size) under the batch learning ICP. This gave an insignificant improvement of $\Delta AIC=5$ (p=0.59) over the batch baseline, still far worse than the exchangeable or economical DEF. In experimental settings where the sample size is fixed (e.g., Experiment 2), this heuristic reduces to the batch baseline model we tested in the paper and found to be inferior to the exchangeable DEF and the economical DEF.
***Reply to "Boundary-based heuristic model"***
We thank the R for the suggested intuitive model, but we do not fully understand the suggested procedure of “making finer divisions with more and more data points”. It is possible that participants store not the clusters' means and widths, but the boundaries. However, a sensible boundary must rely on memorizing all the previous stimuli so that the divider does not cross densely populated regions. This renders this procedure less cognitively plausible. Memorizing densely populated regions is then tantamount to having a cluster-based representation, such as the GMM-based ICP we use.
***Reply to "What is the spread of data in Figure 1, particularly panels B and D?"***
To visualize the spread, we replot Figure 1B and 1D in response PDF (Panels A & B), with individual participants’ data points denoted by grey dots linked by grey lines.
***Reply to "Are there simpler null models to compare against?"***
Please see the general response. We also ran two additional batch ICPs using priors suggested by other reviewers. These are still significantly worse than the sequential DEFs.
***Reply to "K_max from the aleatoric component is across participants"***
This is not a practical concern, because the amount of information provided by this $K_{max}$ is very small: only a tiny fraction of the reported $K$ reach $K_{max}$. If we increase $K_{max}$, the K-slack model assigns a nonzero probability ($\epsilon$) to values that never appear in the dataset, which lowers the likelihood. In this sense, $K_{max}$ is the maximum-likelihood solution for the maximum predicted $K$ under the slack model. Computationally, too large a $K$ induces a much larger number of clusters, which makes the sum over permutations in (6) impractical to compute.
***Reply to "Is there a way to get running estimates of the distribution over the course of the experiment?"***
We considered this during our experimental design and decided not to go for this choice. Asking our participants to report on the fly may create confirmation biases or force them to commit to a distribution that is originally uncertain. Instead, we designed the 10/70 sample-size comparison without an intermediate report to test for the effect of sample size on the reported distribution.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response.
I don’t agree that a boundary-based estimate driven by visual priors would require memorization of all previous data. At worst, perhaps one needs to memorize extremal points. However, if one permits some degree of error (as seems reasonable in context), then ascribing clusters to approximate visually bounded regions still seems to make some sense. I do agree, however, that your model is a good approach to this kind of approximate clustering. How and whether it relates to visually driven spatial priors is a question that would be unfair to ask you to address, even though it is interesting.
Thank you for the new plots and the new analyses with other models. I am adjusting my score. | Summary: This paper presents a visual density estimation task for human subjects based on sequential data from gaussian mixture models, complete with experimental data analyzed through a bounded-rational model of behavior. The paper reports data on the quality of the density reported by subjects, claiming that it roughly matches the true distribution's first three moments while highlighting some systematic mismatch in the number of reported mixture components. A variant of the task is performed using a different modality (numerical data instead of visual stimuli). After the task and the data is presented, the paper develops a bounded rational model for behavior. This model is composed by a generalization of the chinese restaurant process plus a sophisticated noise component. The main difference from the CRP is that this model implements in general an "economical" density estimation process where the likelihood of adding new clusters decreases faster than in the CRP. The behavioral model comes with an inference scheme implemented in pytorch, which allows for gpu acceleration. This model is applied to the data, showing that the "economical" version explains the data better than simpler alternatives (regular CRP or batch learning where cluster assignment is not sequential). This is argued to support the idea that subjects are affected by constraints on memory capacity.
-----
(Scores edited following the author's rebuttal)
Strengths: - The paper contains a large amount of novel work, including a behavioral task, a bounded rational observer, and advanced techniques for inferring the parameters of the observer and comparing it with alternatives explanations of empirical data.
- The task is ambitious in that it has a high-dimensional report.
- The bounded rational observer is nontrivial but well explained, despite the limited space. From a statistical standpoint, this model and its efficient implementation are valuable.
Weaknesses: 1. The statements that the subjects's density estimates are reasonably good do not seem sufficiently supported by the data. Line 86: "the reported densities tracked the first three moments reasonably well", but figure 1 does not show the data (the values of the true and reported moments), only the estimated correlation between them. Line 105: the density estimation quality is "reasonably good" in experiment 3, but figure 7 does not show any data on this - it only shows that the economical DEF reproduces certain patterns in subject behavior.
2. As pointed out in the paper (lines 291-292, "the prediction errors on the number of clusters and cluster variance are still large"), the economical internal construct prior (ICP) model is simply not very good at describing the empirical behavioral data. This is not necessarily a problem by itself, but it makes it difficult to draw any conclusions about human behavior from the application of the model to this dataset. In particular, the claim that "We provide key experimental and modeling evidence that humans may employ an online and yet finite (in expectation) model" does not seem adequately supported: if the economical ICP does not describe well subject behavior, it does not matter that it does comparatively better than the CRP or batch learning.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you provide more evidence that the quality of the density estimation reports is "reasonable", despite the clear systematic errors in the number of components?
2. Can you clarify why the model provides experimental insight into human behavior if it does not capture basic features of the behavior, like the mean and variance of the components? Moreover, how is this mismatch compatible with the claim that "The DEF with the economical ICP captures all of the behavioral patterns reported in Section 2, including the inconsistent number of clusters and the low overlap ratio between adjacent clusters"? Figure 5A (referenced in this passage) only shows three examples; I can't find any quantitative or statistical support for this claim. Alternatively, the claims about the insights derived from the application of this model to human behavior should be significantly reduced.
3. Figure captions should describe what's in the figure, not (just) provide comments. For instance, the caption of Figure 1E says "adjacent clusters in the reported distributions did not overlap much", but does not say what is being plotted there. How is the overlap computed? The figures would be clearer if this pattern/tendency in the captions was avoided.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does highlight limitations in the results, although (as discussed above) other passages claim conclusions that do not seem supported by the data. Potential negative societal impact is not a concern here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's suggestions on providing more details about human data and model fits, and more quantitative support for qualitative statements. These suggestions have inspired us to dig deeper into our data and model fits, which has yielded richer results (see response PDF) that we believe are more convincing. Please see detailed responses below as well as the general response to all Rs.
***Reply to "More evidence that the quality of the density estimation reports is reasonable"***
In our previous manuscript, we showed that subjects’ density estimates roughly match the sample distributions in their moments (i.e., mean, sd, and skewness; Figure 1C for Experiment 1). Following the R’s suggestions, we have enhanced these plots of moments for Experiment 1 (panels D&E in the response PDF) and added similar plots for Experiment 2 (panel F). These new plots include three layers of data:
1. the distribution of responses of individual trials (collapsed across subjects), plotted as the grey heatmap in the background.
2. average responses and individual differences across subjects, plotted as the 5 black dots and their error bars (standard error). To compute the 5 average responses, the values of the sample mean (sd, skewness) are evenly divided into 5 bins, and the trials in each bin are averaged, first separately for each subject and then across subjects.
3. the regression lines as before.
Along with these plots of reported moments, we also present the fits of our economical DEF (sub-panels highlighted by red dots and lines). We see that the economical DEF captured not only the report of the sample mean (first row of panels D–F) but also the patterned errors in the report of sample sd and skewness (second and third rows of panels D–F). The PDF includes a few more comparisons between human data and model fits where we visualize much richer patterns in the high-dimensional human data and show how well our model fits these patterns. All of these results suggest a reasonably good fit of the economical DEF to behavioral data.
***Reply to "The estimated model does not capture data well"***
Apologies for the misunderstanding caused by the original manuscript. To visualize that our model is capable of capturing various aspects of human data, we provided a more thorough illustration of model behavior in the response PDF. The fitted model behaves like the human subjects in both (1) the moments of the reported marginal density and (2) the reported cluster properties. Please see the **How well does the model produce human data? (Ds9y, Xmkb, ti72)** section in the general response for the detailed discussion.
***Reply to"What we can learn from the model if it does not capture the mean and sd."***
The plots in the response PDF show that the model can capture the mean, sd, and other measures in the human data. Please see the replies above as well as the general response for details.
***Reply to "Figure 5A only shows three examples and a lack of quantitative and statistical support"***
Figure 5A shows example trials, and the figures showing the overlap ratio and inconsistent report are in Figure 7 of Appendix B.4. Sorry for the slightly confusing cross-referencing.
***Reply to "Overclaims that are not well supported"***
We will tone down the overly strong statements at the suggested places and elsewhere. We will make sure there are no other statements without data or statistical support.
***Reply to "Figure caption styles and computation of the overlap ratio"***
We provided an intuitive visualization of the overlap ratio in the figure, but agree that we should be more explicit about the definition. The overlap ratio is defined as the percentage of a cluster’s area covered by any other cluster, averaged across all clusters in the report. This computation has the advantage of not being affected by the number of clusters. We also tried other measurements of the overlap ratio (e.g., the area under the second-highest cluster density curve), which gave similar results. The overlap between clusters is low. We will add details of the computation of the overlap ratio to the revised text.
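This computation can be sketched as follows for one-dimensional, equal-weight Gaussian clusters, using simple grid-based numerical integration; our actual procedure may differ in detail (e.g., in how cluster weights enter).

```python
import numpy as np

def overlap_ratio(means, sds, n_grid=10_000):
    """Fraction of each cluster's density mass also covered by the pointwise
    maximum of the other clusters' densities, averaged over clusters.
    Assumes 1-D Gaussian components with equal weights and >= 2 clusters."""
    lo = min(means) - 5 * max(sds)
    hi = max(means) + 5 * max(sds)
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]
    dens = np.stack([
        np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        for m, s in zip(means, sds)
    ])
    ratios = []
    for k in range(len(means)):
        others = np.delete(dens, k, axis=0).max(axis=0)   # max over other clusters
        covered = np.minimum(dens[k], others).sum() * dx  # mass shared with others
        total = dens[k].sum() * dx
        ratios.append(covered / total)
    return float(np.mean(ratios))

# Well-separated clusters give a ratio near 0; identical clusters give 1.
```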
Adding comments is common in figure captions in cognitive science papers. We will make sure that the comments are clear and faithful to the content, and that all variables/features are clearly defined in the main text. We appreciate the R’s comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for replying to my questions. The additional analyses/plots go a long way towards addressing my concerns - especially the central one, that there was not enough evidence that the DEF with economical ICP captures well subject behavior. I have increased my scores accordingly. | Summary: The authors consider the question: how do humans estimate probability distributions? They study this by performing three experiments where human subjects, where they ask participants to recover Gaussian mixture models after seeing IID samples. They find that while subjects do seem to get closer to the true mixture model as they see more samples, they consistently seem to report about three different components regardless of the ground truth number of components. They speculate that this is due to humans having limited memory.
They then propose a human model consisting of a rational component that performs approximate Bayesian inference, generalizing the standard Chinese Restaurant Process (CRP), and an _aleatoric_ component in which humans pick the number of components and then merge and split clusters until the desired number is reached (with some noise). They derive an efficient estimator for the likelihood under this model using Monte Carlo simulation. They then show that their model fits their experimental data much better than the standard CRP. The better performance of their model suggests that it may not be necessary to keep the human prior exchangeable.
Strengths: * The paper is generally clearly written, and the authors do a good job of motivating their problem and describing their results.
* The authors' structured online density estimation task is clean and straightforward.
* The key experimental observation (that the human subjects seem to consistently estimate around 2 or 3 clusters, regardless of ground truth) is very clearly presented and clearly of great interest to this subfield of human modeling.
* The authors perform two additional experiments validating the key experimental observation, with different distributions and different domains.
* Their proposed aleatoric component follows naturally from their key observation, and cleanly captures the inconsistent number of clusters reported by human subjects.
* The authors experimentally validate that their proposed model generally performs better than the standard CRP (as well as a batch learning baseline) for all three experiments, using five different criteria: AIC, #clusters correct, negative log likelihood, the error in means, and the error in log variances.
* The authors give the fitted parameters and discuss how to interpret them, which I found quite interesting.
Weaknesses: * All three experiments featured relatively simple Gaussian mixtures -- for example, experiment 1 has 1-4 clusters, experiment 2 has 1-3 clusters, and experiment 3 looks at _unimodal_ Gaussian mixtures/distributions. As a result, it seems hard to distinguish the interpretation where human subjects have limited memory and are using the proposed ICP from the interpretations where human subjects just happen to estimate 2--3 clusters across the board, or where they consistently tend to overestimate the number of clusters for other, unrelated reasons.
* Similarly, the proposed ICP seems quite convoluted (e.g. the authors' likelihood estimator requires Monte Carlo simulation) and I'd be surprised if human subjects were actually using it.
* Despite the complexity of the proposed ICP and the relative simplicity of the distributions being studied, the error bars for fitting human predictions are still relatively large (as shown in Figure 4), suggesting that it does not accurately capture the underlying human behavior.
* I found figure 3 confusing; in particular, while the caption was quite helpful to my understanding, I still don't think I understand the rational or aleatoric component parts of the figure, and I did not find them helpful when I was first reading the paper.
* Similarly, I found the discussion of how the aleatoric and rational components were composed to be initially confusing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It seems like the combination of the rational and aleatoric components are quite complicated -- have you considered simpler models? For example, instead of doing online expansion, perhaps the ICP just has a fixed prior distribution over cluster count.
2. Similarly, given the complexity of the ICP, I'm curious how the authors interpret the fact that human subjects are able to (apparently) use it.
3. As mentioned in the weaknesses section, I'm curious about how these results generalize to more complicated inference tasks. I understand if the experiments have not been done, but what do the authors expect the results to be if you looked at experiments with more clusters or in more realistic settings?
4. I would've appreciated more discussion about the actual implementation of the estimator -- for example, how reliable is the log-likelihood estimate, and how sensitive is it to the number of MC simulations?
5. Similarly, I would've appreciated more analysis of when the predictions of their proposed ICP fall short of reality (as opposed to only cases where the number of clusters agree with human report).
6. As this is a relatively theoretically oriented paper, this is not strictly necessary. However, I wonder if the authors have seen the tendency for human subjects to pick a fixed number of clusters in other contexts, perhaps due to computational constraints. Do you expect this to have implications in practice? Are there more realistic contexts where this tendency causes systematic deviation from "normative" behavior?
In addition, there is the following typo in the paper:
* Line 205-6: ... until **there** $\hat K^i$ clusters -> ... until **there are** $\hat K^i$ clusters
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * I commend the authors for explicitly highlighting the relatively poor fit (albeit significantly better than prior models) of their proposed ICP on their experiments. However, I would've appreciated more discussion about why this might happen, and what next steps could be taken to address this issue.
* I think the authors should include more discussion of the limitations and weaknesses I mentioned in the weaknesses section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and helpful feedback.
***Reply to "It seems hard to rule out the interpretations where human subjects just estimate 2--3 clusters across the board"***
Behaviorally, we see that the distribution of reported $K$ varies across the number of samples and distribution types (Panels B & J in PDF), so participants are not reporting $K$ randomly. Besides the batch baseline model mentioned in our general response, we also implemented another batch ICP in which the prior over $K$ is a discrete distribution over $K=2$ and $K=3$ (note that the aleatoric component still permits more values of predicted $K$). Compared to the Poisson prior in the batch baseline model, the new prior yielded a small, statistically insignificant improvement (mean $\Delta$AIC $\approx 20.0$, $p=0.61$), still far worse than the CRP-GMM ICP.
***Reply to "ICP seems too complicated to be implemented by the brain"***
This question may arise from a common confusion between the participant’s view and the experimenter’s view of the ICP. In the former, the participant always uses a single particle trajectory $z_{1:t}$ for cluster assignment. However, these assignments are latent variables hidden from the experimenter (us), so only the experimenter needs to average (or marginalize) out all possible latent trajectories that could be generated in the participant’s mind, purely for modeling purposes. Note that Gershman & Niv in [25] also used a large number of simulations to marginalize out latent variables, although the likelihood function used there is more restricted.
***Reply to "Large error bars of model predictions"***
Please see the general response, ***How well does the model produce human data? (Ds9y, Xmkb, ti72)***
***Reply to "Combination of the rational and aleatoric components"***
We will further clarify the functions of the two components in a revision. In short, the rational component is the approximate Bayesian inverse of an ICP, implemented as a particle filter. This component produces raw predictions of the reported distribution parameters, as in most other Bayesian models of human behavior. Now, if there were only a single Gaussian in the data and in the participant’s report, we could add simple noise models to create a valid likelihood function, such as the Dirichlet and isotropic Gaussians in Eqn (6). However, the number of clusters in the report and prediction varies in our setup, so we need to introduce a more sophisticated and structured noise model, the aleatoric component. Please see our general response for intuitions on the aleatoric component.
***Reply to "Generalisation to realistic scenarios with more clusters"***
One key prediction of the decaying $\alpha_t$ with $K_t$ is that humans are less likely to create a new cluster when there are already many clusters; however, with overwhelming evidence from the observations, a new cluster can still be created if, for example, many samples appear close to one another and far away from existing clusters. In addition, if memorizing the clusters takes up memory capacity, then we should expect $\alpha_t$ to decrease faster when the dimensionality of the stimulus $x_t$ is higher, since each cluster now requires multiple times more space to store. We will further verify these predictions in future work.
***Reply to "Quality of the parameter estimation procedure"***
Detailed information about the fitting algorithm is presented in Appendix B.3, where we quantify the variance and bias of the estimated log-likelihood as a function of # MC simulations in Figure 10, and we show the convergence of the parameters in Figure 11. Overall, the amount of variance and bias is very small at the optimal parameters found, although convergence is less ideal for Experiment 3. In addition, we present parameter recovery results in panel K of the response PDF.
***Reply to "Analysis of when the model fails (e.g. to predict the number of reported clusters)"***
This is a very interesting suggestion. Note that the correlation between the statistics of reported and true distributions in Figure 1 does not exclude predictions with an unmatched number of clusters. It is also expected that the distributions would be smoother if the predicted $K$ is less than the reported $K$, and vice versa, since the cluster width is small *a priori* with high confidence. However, the behavior of the predicted cluster parameters is unspecified and unconstrained by data. Perhaps a less stringent aleatoric component may reveal more interesting predictions for an unmatched number of clusters, but any effects would be tightly linked to the inductive biases of the specific aleatoric component.
***Reply to "Tendency of humans to pick a fixed number of clusters"***
Please see our response to a similar question raised by **Ds9y—"Implication of the results to more naturalistic scenarios”**. Ravignani et al (2016) [*] found that after several rounds of laboratory music evolutions, participants’ reported distribution of rhythms became much more clustered than the original uniform distribution. According to Figure 2 in their paper, the number of clusters that emerged from participants’ reports seems to be close to 3. We also speculate that the overestimation of the number of clusters is closely related to the perception of illusory causal relationships in human superstitious thinking [**].
[*] Ravignani, A., Delgado, T., & Kirby, S. (2016). Musical evolution in the lab exhibits rhythmic universals. Nature Human Behaviour, 1: 0007. doi:10.1038/s41562-016-0007.
[**] Matute, H., Blanco, F., Yarritu, I., Díaz-Lago, M., Vadillo, M. A., & Barberia, I. (2015). Illusions of causality: How they bias our everyday thinking and how they could be reduced. Frontiers in Psychology, 6, 1–14.
---
Rebuttal Comment 1.1:
Title: Thanks for your detailed response.
Comment: Thanks for the authors' detailed responses. As you have addressed all of my concerns, I am raising my score to an 8 and my confidence to 4. | Rebuttal 1:
Rebuttal: We thank all reviewers for their careful read of the paper and helpful feedback. We are glad that all five reviewers gave fair and comprehensive summaries, indicating that the paper is mostly well written, as also stated by four Rs (Ds9y, Xmkb, ti72 & 8bgA). **All** Rs acknowledged that this work is interesting or novel. Three Rs (**Ds9y**, **Xmkb** & **ti72**) also praised the modelling work as clean, rigorous, non-trivial, and valuable.
Meanwhile, most of the reviewers' concerns can be resolved with additional modelling findings (2 alternative priors for the batch ICP) and richer comparisons between human behavior and model predictions (11 figures in the PDF), which we provide in this rebuttal and will incorporate into a revision. We believe these will resolve most of the weaknesses and questions raised and improve the quality of the manuscript.
***How well does the model produce human data? (Ds9y, Xmkb, ti72)***
We initially wrote “the prediction errors on the number of clusters and cluster variance are still large” in the last paragraph as one of the limitations. We now find this to be overly pessimistic. More comprehensive comparisons between the human data and model predictions (see response PDF) indicate:
1. Our economical DEF model produces human patterns in many aspects, not only in the measures the loss function was optimized for (i.e., the number, mean, sd, and weight of clusters), but also in the moments of the whole distribution (mean, sd, and skewness) and the co-variance of different measures (e.g., how the reported cluster sd decreases with cluster number and eccentricity, but increases with cluster weight).
2. The economical DEF performs roughly as well as the participants themselves in predicting the number of clusters $K$. In Experiment 2, there are repeated trials with identical stimuli ($\mathbf{x}_T$). Using these trials, we show in Panel C of the PDF that participants reported a different number of clusters ($K$) on repeated presentations just below 50% of the time. Similarly, our model could predict $K$ with accuracy around 0.5, very close to the participants themselves. The predictive power of the best model is thus limited by the stochasticity of human behavior.
***Can the data be explained by simpler models? (Ds9y, Xmkb, 8bgA)***
The complexity of our economical DEF as a model of our participants is not as high as it appears to be. All we assume of the participant is that they use a *single* latent particle *at each time step* to assign the new sample to a cluster. The large number of Monte Carlo simulations is only needed for us as experimenters to fit the model, by marginalising out the latent cluster assignments in the participant’s mind, which are unobservable to us.
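The experimenter-side marginalization described here amounts to a Monte Carlo average of the report likelihood over simulated latent trajectories. A generic log-mean-exp sketch follows; the `simulate_trajectory` and `report_loglik` callables are placeholders for the model-specific pieces, not the paper's actual implementation:

```python
import math
import random

def mc_log_likelihood(report_loglik, simulate_trajectory, n_sims=1000, seed=0):
    """Monte Carlo estimate of log p(report) = log E_z[p(report | z)],
    marginalizing over latent trajectories z the experimenter cannot observe.

    simulate_trajectory(rng) -> z samples one latent trajectory;
    report_loglik(z) -> log p(report | z).
    """
    rng = random.Random(seed)
    logs = [report_loglik(simulate_trajectory(rng)) for _ in range(n_sims)]
    m = max(logs)
    # log-mean-exp for numerical stability
    return m + math.log(sum(math.exp(l - m) for l in logs) / n_sims)
```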
Conceptually, the CRP-GMM ICP is consistent with a simple heuristic. First, whether a new cluster should be introduced depends on a) the prior tendency to add a new cluster, described in Eqn 1; and b) how well the incoming sample is captured by the current distribution, measured by the Student’s t distribution in lines 177-178. Second, the Gaussian distribution only requires the mean (location) and variance (width), which the participants specify for each cluster.
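For readers unfamiliar with the CRP prior invoked in step a), a minimal sketch of the standard assignment probabilities (the paper's Eqn 1 generalizes this with a sequential, decaying concentration parameter):

```python
def crp_assignment_probs(counts, alpha):
    """Standard CRP prior over the cluster assignment of the next sample.

    counts: sizes of the existing clusters; alpha: concentration parameter.
    Returns probabilities of joining each existing cluster (proportional
    to its count), followed by the probability of opening a new cluster.
    """
    t = sum(counts)  # number of samples assigned so far
    return [c / (t + alpha) for c in counts] + [alpha / (t + alpha)]
```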
To further validate whether the key computations in the CRP-GMM ICP are indeed necessary, we compared 11 simpler models in the paper (Appendix B.4.1). Below are examples that implement a static or less adaptive prior:
1. The batch baseline replaces the sequential prior with a Poisson distribution and changes the structure of the ICP to a static prior;
2. The No Counting Prior ablation replaces the counts of the clusters (core feature of CRP) with the average count;
3. The No Distribution Prior ablation removes the conjugate prior over the Gaussian distribution when evaluating the likelihood of the new sample.
All these simpler models (see Appendix B.4.1 for the full list) led to worse fits than CRP-GMM ICP models (the exchangeable DEF and the economical DEF).
***Intuition and motivation for the aleatoric component (8bgA, ziH7)***
The core motivation for having the aleatoric component is to provide a well-defined likelihood function that is also cognitively plausible. Though the rational component induced by the ICP is stochastic and can in theory generate almost all behaviors with nonzero probability, we found empirically that the ICP alone cannot practically predict the correct $K$ in all trials for all subjects (i.e., the likelihood is near zero). The issue lies in the fact (as 8bgA commented) that subjects may not report exactly the clusters from their inference, due to memory and motor noise. The aleatoric component provides a highly structured noise model to accommodate such noise (Eqns 5 & 6), which allows our model to better capture the human data.
In its technical challenge and solution, the aleatoric component resembles the classic example of drift-diffusion models (DDMs): the behavioral quantities are multi-dimensional, mixing discrete and continuous variables; to jointly model these quantities, one can first model the discrete distribution and then construct a conditional distribution for the continuous variables given the discrete ones. For DDMs, the continuous response-time distribution is often conditioned on a discrete correct/wrong label. Similarly, our DEF places a conditional distribution over the discrete $K$. The rest of the DEF is more complicated than a DDM because
1. the support of the predicted continuous variables (w, m, sd) depends strongly on the number of predicted clusters, and
2. the parameters (w, m, sd) that define the density function are permutation invariant.
Pdf: /pdf/d062a0b964f42204e9f4ef6440a3ec4e6c000942.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This study investigates how humans infer probability distributions from samples by combining experiments and modeling. The main contributions include a careful characterization of the behavioral tendency to overestimate the number of clusters as well as a modeling framework to identify how this behavior can arise from approximate inference. By fitting parameters from a generalized version of the Chinese restaurant process, the authors conclude that behavior can arise from a myriad of factors including strong prior expectations about the sample variance, undercounting samples, as well as a decaying tendency to form new clusters.
Strengths: Human experiments are well-summarized and clearly document a tendency to overcluster samples. By directly asking participants to report the data-generating distribution, the approach provides a direct readout of the participant's biases that are brought to bear when solving complex probabilistic inference problems.
The paper is well-written and easy to read except for the section introducing the modeling framework where I got lost in the notations for a while. But the illustrations were very helpful.
The modeling approach is both rigorous and appropriate for answering the questions tackled by the study.
Weaknesses: I see two main weaknesses.
First is that the insights gained from modeling seem to be quite minimal and specific to the paradigm used here. I'd like to know whether/how the conclusions might apply more broadly to naturalistic scenarios.
Second, given that the model fits are not that great, it is quite possible that there are alternative models not considered here that explain behavior better. A discussion of alternatives is lacking.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Since the fitting procedure relies on approximate inference, I'd like to see a validation of the model fits to show that the method is able to recover the true parameters. Is this already published or included somewhere in the supplementary?
Based on the best-fitting parameters, the authors point out a trade-off between two different effects. A decaying tendency to form new clusters (which should lead to underclustering) and a strong prior that underestimates the variance of the gaussians in the mixture (which should lead to overclustering). Could the trade-off simply be a result of over-parameterization? Again, some sort of validation of the model fitting procedure would help here.
On a related note, it wasn't clear to me how the model explains why humans report more clusters when sample size increases from 10 to 70. Shouldn't the decaying $\alpha$ lead to fewer clusters as a function of sample size?
Given that data exchangeability is not a constraint for human inference, why not validate the model directly by running a new experiment where the sequence of samples is presented in a reverse order? I suppose the model will have concrete predictions for such an experiment.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors are clear about the weakness of the model in capturing aspects of the data. But they should discuss possible alternative approaches. There should be some discussion of the limitations of this paradigm and what it means for investigating structure learning in general.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and constructive suggestions.
***Reply to "Implication of the results to more naturalistic scenarios"***
Our experimental design is an abstraction of many cognitive tasks that require density estimation, or finding statistical patterns in samples. A couple of motivating examples were given in lines 22-24. In addition, humans' ability to learn a probability distribution from samples has been demonstrated in many prior works (Ernst & Banks 2002; Hills et al. 2002; Trommershauser et al. 2003; Kording & Wolpert 2004), but how they acquire such distributions remains largely unknown.
From a broader perspective, the current paper may help address why humans construct a biased perception of the uncertain world. For example, Ravignani et al (2016) [\*] found that, after several rounds of laboratory music evolutions, participants’ reported distribution of rhythms became much more clustered than the original uniform distribution (Figure 2 in their paper). In other words, they found that humans exhibit an over-clustering tendency in the cultural evolution of music, which is similar to our empirical findings. Notably, Ravignani et al did not provide a cognitive model to explain why or how such over-clustering occurs in music evolution, and our DEF approach offers a promising theoretical method to analyze those results. Moreover, the overestimation of clusters we found may be related to the perception of illusory causal relationships in human superstitious thinking [\*\*]. Future work could consider extending our DEF to the construction of high-dimensional uncertainty representation and the emergence of causal connections among different dimensions.
[\*] Ravignani, A., Delgado, T., & Kirby, S. (2016). Musical evolution in the lab exhibits rhythmic universals. Nature Human Behaviour, 1: 0007. doi:10.1038/s41562-016-0007.
[\*\*] Matute, H., Blanco, F., Yarritu, I., Díaz-Lago, M., Vadillo, M. A., & Barberia, I. (2015). Illusions of causality: How they bias our everyday thinking and how they could be reduced. Frontiers in Psychology, 6, 1–14.
***Reply to "Discussion of alternative models"***
Please see the general response. Our previous statement about the model fitting performance was too pessimistic. Further results in the response PDF show that our best model, the economical DEF, captures the human data in many aspects beyond cluster number and sd, and approaches the upper limit of test-retest reliability in predicting $K$.
***Reply to "Parameter recovery"***
We ran the suggested parameter recovery experiments and show the results in the response PDF. Given randomly chosen parameters for the full model, we generate 100 sets of synthetic stimuli, reset the parameters to new random values, and then fit the parameters on the synthetic dataset using the procedure described in the main paper. The results show that the recovered parameters are largely consistent with the random initial values (Panel K in response PDF, the average correlation between the source parameters and the fitted parameters is 0.84). We will include parameter recovery in the Appendix.
***Reply to "Overparameterization"***
The opposite effects of decay rate and prior variance offer an explanation for the observed reported distribution on K, but these two parameters do not constitute overparameterization of the model for this given dataset. This is because the prior variance is also fit by the reported cluster variance in the dataset and is thus well-specified. There are only weak correlations between these two fitted parameters (Experiment 1: -0.12, Experiment 2: -0.05, Experiment 3: -0.05). Further, the parameter recovery experiments also indicate that these two parameters are identifiable given the data.
***Reply to "More clusters with sample size despite having a decaying alpha"***
We clarify that the effect of a decaying alpha is to decrease the *tendency* toward creating a new cluster, rather than to decrease the cluster count itself. As such, even though $\alpha_t$ becomes small at large sample size $t$, the model size $K_t$ can still increase.
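This point can be illustrated with a toy CRP-style simulation in which the new-cluster tendency shrinks as the number of clusters grows; the exponential decay form assumed below is a hypothetical choice for illustration, not necessarily the one fitted in the paper. The cluster count never decreases, even as $\alpha_t$ shrinks:

```python
import random

def simulate_counts(n_samples, alpha0=2.0, decay=0.5, seed=0):
    """Toy CRP-style seating with a cluster-dependent concentration
    alpha_t = alpha0 * decay**K_t (hypothetical decay form).

    Returns the trajectory of the cluster count K_t over samples.
    """
    rng = random.Random(seed)
    counts = []    # sizes of the existing clusters
    history = []
    for t in range(n_samples):
        alpha_t = alpha0 * decay ** len(counts)
        r = rng.random() * (t + alpha_t)  # t == number seated so far
        acc = 0.0
        for i, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[i] += 1  # join an existing cluster
                break
        else:
            counts.append(1)    # open a new cluster
        history.append(len(counts))
    return history
```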
***Reply to "Additional behavioral experiments to test exchangeability"***
The suggested experiment that tests for exchangeability by well-designed stimulus order is an interesting direction that we are currently exploring. The exact quantifiable effects of order-dependence are not directly obvious and may be intertwined with other cognitive processes, such as primacy, recency, and hindsight effects. Various hypotheses can lead to different predictions that are best tested under carefully manipulated stimulus order, opening up a large design space for exploration. We thus defer this to future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I am now slightly more confident about my original rating so I increased my confidence score. | null | null | null | null | null | null |
Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence | Accept (poster) | Summary: The authors propose a method to extract per-pixel feature descriptors from multi-scale and multi-timestep feature maps generated by diffusion models. These descriptors can be utilized for various downstream tasks.
The framework is evaluated on the task of semantic keypoint correspondence, specifically on the SPair-71k real image benchmark. The authors claim that their method achieves superior performance on this benchmark compared to other approaches. Additionally, they demonstrate that their method is flexible and transferable, as the feature aggregation network trained on real image pairs can also be used on synthetic image pairs with unseen objects and compositions.
Overall, the proposed Diffusion Hyperfeatures framework aims to enhance the internal representations of diffusion models and improve their utility in computer vision tasks like semantic keypoint correspondence.
Strengths: This paper discusses a framework called "Diffusion Hyperfeatures" for consolidating feature maps in diffusion models. Here's a substantive assessment of the strengths of the paper across various dimensions:
Originality: The paper introduces a novel approach to consolidate and extract per-pixel descriptors from intermediate feature maps generated by diffusion models. While existing works focus on selecting specific layers and timesteps, this paper proposes a comprehensive framework that considers all intermediate features. This consolidation process through a feature aggregation network is a new concept, which contributes to the originality of the paper.
Quality: The paper appears to be of high quality as it references relevant literature and state-of-the-art techniques in computer vision, such as ConvNets, Vision Transformers, and GANs. The authors also evaluate their proposed framework on the task of semantic keypoint correspondence using real images from the SPair-71k benchmark. The evaluation includes an analysis of different layers and timesteps of diffusion model features, demonstrating a thorough investigation. Additionally, the generalization of Diffusion Hyperfeatures to out-of-domain data is examined by evaluating them on synthetic images generated by the diffusion model.
Clarity: The paper presents its ideas and contributions in a clear manner. It provides a concise overview of the challenges faced with existing methods and describes the proposed Diffusion Hyperfeatures framework step-by-step. The use of terminology and technical language is appropriate for the target audience of researchers in computer vision. However, without the full paper, it is difficult to assess the clarity of the detailed methodology and experimental setup.
Significance: The paper addresses an important problem in computer vision, namely how to effectively utilize the internal representations of diffusion models for downstream tasks. By proposing the Diffusion Hyperfeatures framework, the authors aim to improve the utility of diffusion models beyond image generation. The evaluation results on semantic keypoint correspondence and the demonstration of generalization to synthetic images indicate the potential significance of this approach. If the proposed framework proves to be effective, it could contribute to advancements in various computer vision applications.
Overall, based on the provided text, the paper demonstrates strengths in terms of originality, quality, clarity, and potential significance.
Weaknesses: Based on the provided paper, here are a few potential areas where the paper could benefit from improvement:
Comparison with Existing Methods: While the paper mentions that features from ConvNets, Vision Transformers, and GANs have demonstrated significant capabilities, it would be beneficial to provide a more detailed comparison with existing methods that extract feature descriptors from diffusion models. This would help establish the novelty and superiority of the proposed Diffusion Hyperfeatures framework.
Experimental Evaluation: The paper briefly mentions evaluating the proposed framework on the task of semantic keypoint correspondence using real images from the SPair-71k benchmark. To strengthen the paper, it would be valuable to include a thorough analysis of the experimental results, including quantitative metrics, comparative evaluations with state-of-the-art methods, and possibly visual illustrations or qualitative assessments of the generated keypoint correspondences.
Generalization to Diverse Domains: The paper mentions evaluating Diffusion Hyperfeatures on synthetic images generated by the diffusion model. While this demonstrates some level of generalization, it would be useful to explore the performance and robustness of the framework across a broader range of datasets and domains. This could involve testing the framework on different benchmark datasets and real-world scenarios to validate its effectiveness in diverse settings.
Clear Methodology Description: The provided text offers a high-level overview of the proposed framework, but for a comprehensive evaluation, it is crucial to have a clear and detailed description of the methodologies employed. It would be helpful to include information about the architecture of the feature aggregation network, the specific techniques used for consolidation, and any additional preprocessing steps or modifications made to the diffusion model.
Potential Limitations and Future Directions: The paper could benefit from discussing potential limitations or challenges associated with the proposed framework. Identifying these limitations and suggesting future directions for improvement would enhance the depth and completeness of the research.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Clarification on Consolidation Process: Could the authors provide more details regarding the feature aggregation network used to consolidate the intermediate feature maps? How does it handle variations in scale and time? Are there any specific design choices or architectural considerations that impact its performance?
Experimental Setup: It would be helpful to understand the specific experimental setup employed in the evaluation of Diffusion Hyperfeatures. Could the authors elaborate on the dataset used, the selection of keypoint correspondence task, and the metrics used for evaluation? Additionally, is there any consideration given to computational efficiency or runtime performance when applying the framework to real-time applications?
Comparison with Existing Methods: The authors briefly mention that existing works select specific subsets of layers and timesteps from diffusion models for different tasks. Could the authors provide a more detailed comparison or discussion on how their approach differs and potentially outperforms these existing methods? Are there any specific limitations or challenges associated with the subset selection approach that Diffusion Hyperfeatures address?
Transferability and Generalization: While the paper mentions generalization to synthetic images, it would be valuable to explore the transferability and generalization capabilities of Diffusion Hyperfeatures across other domains or datasets as well. Can the authors provide insights or experiments on how the framework performs on different benchmark datasets or real-world scenarios beyond the SPair-71k dataset?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. *Clarification on Consolidation Process.*
We would be more than happy to provide additional details regarding the aggregation network; which specific points were unclear? We discuss details of the feature aggregation network, as well as how we handle variations in scale and time of the diffusion features, in L140-146 of the main paper. We ablate specific design choices of our method, such as the number of diffusion steps or the underlying model variant, in Section 4.2 of the main paper.
2. *Experimental Setup.*
We train our aggregation network on the SPair-71k dataset, and we compare our method against the baselines on both SPair-71k and CUB in Table 1 of the main paper. **We also compare against additional methods such as CATS++ and DINOv2 in Table 1 of the global response, where we outperform both methods by 2\% and 4\% PCK\@0.1_img respectively.** We report the percentage of correct keypoints (PCK), discussed further in L169 of the main paper. We show visualizations of our predicted correspondences in Figures 4, 5, and 6 of the main paper.
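For reference, PCK@α counts a predicted keypoint as correct when it lands within α · max(H, W) of the ground truth, where (H, W) is the image size for PCK@0.1_img or the object bounding-box size for PCK@0.1_bbox. A minimal sketch of the metric (the function name and call signature are illustrative, not from the paper's code):

```python
import numpy as np

def pck(pred_kps, gt_kps, size, alpha=0.1):
    """Percentage of Correct Keypoints (PCK).

    A predicted keypoint counts as correct when it lies within
    alpha * max(height, width) of the ground-truth keypoint.
    `size` is (height, width) of the reference frame: the image
    for PCK@0.1_img, or the object bounding box for PCK@0.1_bbox.
    """
    pred_kps = np.asarray(pred_kps, dtype=float)
    gt_kps = np.asarray(gt_kps, dtype=float)
    threshold = alpha * max(size)
    dists = np.linalg.norm(pred_kps - gt_kps, axis=1)
    return float(np.mean(dists <= threshold))

# toy example: 2 of 3 keypoints fall within 0.1 * 100 = 10 pixels
print(pck([[10, 10], [50, 58], [90, 40]],
          [[12, 11], [50, 50], [60, 40]], size=(100, 80)))  # 0.6666666666666666
```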
3. *Comparison with Existing Methods.*
We compare against SD-Layer-4, a baseline that selects a specific subset of layers and timesteps. We provide a detailed discussion comparing against this baseline in Section 4.1 of the main paper, including a discussion of how subset selection can result in subpar performance on the keypoint matching task.
4. *Transferability and Generalization.*
We provide additional results on real-world datasets such as CUB in Table 1 of the main paper and **PF-WILLOW and PF-PASCAL in Tables 3 and 4 of the global response, where we outperform other methods like DINOv2 by 2\% and 3\% respectively.**
---
Rebuttal Comment 1.1:
Comment: I acknowledge I have read the rebuttal. | Summary: This paper proposes diffusion hyperfeatures, a framework for integrating features across different scales and timesteps to form representative dense feature descriptors. Within the U-Net architecture, unlike other works that use hand-crafted methods to select a particular subset of layers for further processing, this paper proposes a simple aggregation network that aggregates the intermediate features. The effectiveness of this method is evaluated on standard benchmarks for semantic correspondence, including SPair-71k.
Strengths: 1. SOTA performance is achieved with a large gap over existing works.
2. One of the first attempts to tackle the semantic correspondence task with diffusion.
3. Although the paper is easy to read, each paragraph is too long, and some sentences are quite hard to understand. This is a minor issue, though.
Weaknesses: 1. This paper is one of the first attempts to tackle semantic correspondence with the diffusion concept. In this manner, the paper looks novel, since this is very new to the task. However, to me, this work severely lacks contributions. What this paper does is simply take a series of features from Stable Diffusion, a large-scale model, feed them to a simple aggregation network that performs a weighted sum to obtain the final features, and exploit those features with Winner-Takes-All (WTA) matching to find correspondences.
(1) Attempts to select features are not novel, as this is already smartly done in HPF [23] and DHPF [25], while this work simply feeds all the features to an aggregation network.
(2) Using a feature representation that is much more powerful than a standard backbone (ResNet-101) will guarantee apparent performance boosts. This is already demonstrated in CATs++: Boosting Cost Aggregation With Convolutions and Transformers (TPAMI).
(3) No novel diffusion techniques are proposed, as far as I understood.
3. Lacking implementation details. What resolution was used for evaluation and training? This is very important in this task, as performance correlates highly with resolution. Unless the method is evaluated at the same resolution and compared with other works, the comparisons are not fair at all.
4. Why did the authors use only 2 datasets for evaluation? Traditionally, existing methods also evaluate on PF-PASCAL, PF-WILLOW, and TSS. These need to be included, since evaluation on 2 datasets seems very insufficient.
5. In the supplementary material, Section 6.1 explains computational resources. This is very abstract and not very helpful. Comparison to other works is not a must, but at least provide some measurements for completeness.
6. DINOv1 is quite an old model, since DINOv2 is already out. DINOv2 is a much more powerful model that will yield much stronger performance.
7. Many other semantic correspondence works are not cited and compared. As far as I know, the current SOTA is Integrative Feature and Cost Aggregation with Transformers for Dense Correspondence (arXiv'22), which would be better included in the paper. Also, when the method is compared with DINO, only 1 layer is used. In semantic correspondence, multi-scale and multi-level features are highly important, and omitting them can have detrimental effects on performance. So the comparison may not be fair here either. However, as the authors also provided single-layer results for their own method, this is not an issue, but a suggestion to include results that use more layers of DINOv1 or DINOv2.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See weaknesses above
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitations or failure cases are not presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. *This paper is one of the first attempts to tackle semantic correspondence with the diffusion concept.*
Our method is novel. We are not claiming to contribute a feature extraction or matching algorithm hand-crafted for the task of semantic correspondence; rather, we propose a simple and general framework for utilizing features across the diffusion process. **Additionally, our final method achieves state-of-the-art performance on the SPair-71k benchmark, even when compared to DINOv2 and CATS++ (Table 1 of the global response).** The NeurIPS reviewer guidelines on the topic of originality state: “Is the work a novel combination of well-known techniques? (This can be valuable!)”. We present a simple and effective framework that achieves superior performance compared with prior work, similar in spirit to other published works of this nature such as ODISE [1].
2. *What resolution was used for evaluation and training?*
We acknowledge that input resolution does have a large impact on performance, as shown in Cho et al. [2], and we use the same evaluation protocol as Truong et al. [4]. Specifically, we run all the baselines from their respective codebases using their default hyperparameter settings, which we display in Table 6 above, and “compute the metrics on the standard setting, i.e. the original image size, and re-compute the PCK in this setting” [4]. **It is important to note we evaluate our method with downsampled images of resolution 224, following Amir et al. [3], which is the *lowest input resolution* out of all the methods and is a fair comparison of all methods in the main paper.** In our additional experiments, to account for DINOv2’s large patch size, we use input images of resolution 770 so that it produces feature maps at resolution 55x55, the same as DINOv1. In a similar vein, we use input images of resolution 512 for CATS++ to compare against the best possible variant of the method. During training we use input images of resolution 64x64, the resolution of Stable Diffusion’s latent space.
**Table 6. Method Statistics**
| Model | Input Image Resolution |
|:---------------------|:--------------:|
| DINO | 224 |
| **Ours** | **224** |
| DHPF | 240 |
| CATS++ | 512 |
| DINOv2 | 770 |
3. *Why did the authors use only 2 datasets for evaluation?*
**Please see the global response where we run additional experiments on PF-PASCAL and PF-WILLOW, where we outperform other methods like DINOv2 by 2\% and 3\% respectively.**
4. *In supplementary material, section 6.1 explains computational resources.*
We agree that this section could be more informative; which metrics in particular would you like us to include?
5. *DINOv1 is a quite old-model, since DINO-v2 is already out.*
We did not initially compare to DINOv2 because it was released shortly before the submission deadline and has yet to be published. Nevertheless, we have included the results for DINOv2 and we will include it in the final manuscript. **Please see the global response where we compare against DINOv2 and outperform the method by 4\% PCK\@0.1_img on SPair-71k.**
6. *Many other semantic correspondence works are not cited and compared.*
**Please see the global response where we compare against CATS++ and outperform the method by 2\% PCK\@0.1_img on SPair-71k.** Unfortunately, IFCAT [5] does not have code available for us to re-run their method in our standardized evaluation setting or to evaluate their transfer performance from SPair-71k to other datasets, but **comparing our results with their reported results, we still outperform IFCAT [5] by 0.2\% PCK\@0.1_bbox (64.61 vs 64.40, respectively).** Our DINO baseline uses the method proposed by Amir et al. [3], which uses a single layer, so we provide results using a single layer of Stable Diffusion (SD-Layer-4) for comparability.
[1] Xu et al. “Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models.” CVPR 2023. \
[2] Cho et al. “CATs++: Boosting Cost Aggregation with Convolutions and Transformers.” TPAMI 2022. \
[3] Amir et al. “Deep ViT Features as Dense Visual Descriptors.” ECCV-W 2021. \
[4] Truong et al. “Probabilistic Warp Consistency for Weakly-Supervised Semantic Correspondences.” CVPR 2022. \
[5] Hong et al. “Integrative Feature and Cost Aggregation with Transformers for Dense Correspondence.” arXiv 2022.
---
Rebuttal Comment 1.1:
Title: response (1/2)
Comment:
Prior to my response, I would first like to thank the authors for their thorough responses. From the rebuttal, the following concerns of mine remain unresolved.
(1) **Novelty**: From the authors' response regarding novelty, the authors first mention that the main contribution lies in "proposing a simple and general framework for utilizing features across the diffusion process". As I already mentioned in my initial review, I agree that bringing the diffusion concept to semantic correspondence is novel. However, this will be a sufficient contribution to pass the standard of a top-tier conference in this field only if sufficient, persuasive, and informative quantitative and qualitative analysis, results, and experiments are presented. Taking the existing works [A] and [B] as examples, I believe this paper presents an analysis that had already been investigated more thoroughly in the previous paper [A]. For example, [A] carefully visualizes feature maps at each timestep to find what semantics or representations are learned. Also, compared to the concurrent work [B], which has a solid motivation (why diffusion should be incorporated into this task) and a detailed analysis with sufficient visualizations, this paper only visualizes intermediate features as if they were intermediate features of a feature backbone. Some other papers that also perform such investigations in a more thorough way: [C, D]. I got the impression that the authors are simply using the diffusion model as a very heavy **"feature backbone"**. In this manner, it simply looks like "A works well, so let's use it for task B". Without a solid motivation, analysis, and investigation, I believe this impression is unavoidable.
Moreover, the authors mentioned **our final method achieves state-of-the-art performance on the SPair-71k benchmark, even when compared to DINOv2 and CATS++ (Table 1 of the global response)**. I first want to ask: is that performance a contribution of this paper? From my understanding, as already shown in [B], simply using DINOv2 or SD features to find correspondences with the Winner-Takes-All method already yields SOTA performance on this task. This means it is the contribution of those backbone networks, not this paper's contribution. In the semantic correspondence literature, many works have proposed novel ideas to perform well in situations where background clutter or intra-class variations pose additional challenges. In this paper, however, SOTA is achieved by simply feeding the features into the simplest aggregation network. Strictly speaking, it is not SOTA either, because a concurrent work, [B], achieves higher performance with more judicious use of diffusion features along with DINO.
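For context, the Winner-Takes-All matching referred to here is, in its generic form, a nearest-neighbor lookup in descriptor space: each source keypoint is matched to the target location with the highest feature similarity. A minimal sketch of that generic scheme (names and shapes are assumptions, not any paper's implementation):

```python
import numpy as np

def wta_match(src_desc, tgt_feats):
    """Generic Winner-Takes-All matching: for a source keypoint
    descriptor, pick the target location with the highest cosine
    similarity.

    src_desc:  (C,) descriptor at a source keypoint
    tgt_feats: (C, H, W) dense target descriptor map
    returns:   (row, col) of the best-matching target location
    """
    c, h, w = tgt_feats.shape
    flat = tgt_feats.reshape(c, -1)
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    src = src_desc / (np.linalg.norm(src_desc) + 1e-8)
    sims = src @ flat                  # cosine similarity per location
    return divmod(int(np.argmax(sims)), w)

# toy check: plant the source descriptor at target location (1, 2)
tgt = np.zeros((4, 3, 5))
tgt[:, 1, 2] = [1.0, -2.0, 0.5, 3.0]
print(wta_match(np.array([1.0, -2.0, 0.5, 3.0]), tgt))  # (1, 2)
```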
Now I want to talk about the technical novelty as well. I have four concerns.
(1) Whether a means to develop diffusion techniques?
(2) Whether a means to effectively or efficiently aggregate the selected features is proposed?
(3) Whether a means to select the hyperfeatures are justified by thorough analysis and experimental results?
As far as I understood, (1) is not proposed. Also, investigations of feature maps and representations in diffusion models have already been conducted in the referenced papers below. For (2), a novel aggregation approach is also not proposed, as it is just concatenation followed by feeding into a network. CHM, TransforMatcher, NC-Net, and many other works propose to aggregate inputs in novel ways, while this approach does not look novel to me. For (3), DHPF and HPF propose novel ways to select features and provide sufficient analysis to justify the selections. In this paper, however, the features are used as if they were just from another backbone network.
(2) **Resolution** : In semantic correspondence task, indeed evaluation resolution is important, but this is also the same for training resolution as well. This is why CATs++ provided experimental results that verify the impacts of the resolution. I understand that training on low resolution may pose challenges in this framework, as this framework needs to exploit pretrained weights. So this is just a minor concern that will not affect largely to my rating.
(3) **Incomplete experiments**: In the initial submission, the fact that only two datasets were used for evaluation poses a concern. It seems like the submission was an unfinished product that was complemented by the reviews.
[A] LABEL-EFFICIENT SEMANTIC SEGMENTATION WITH DIFFUSION MODELS ICLR'22
[B] A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence arxiv'23
[C] Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation
[D] Diffusion Models already have a Semantic Latent Space
---
Reply to Comment 1.1.1:
Comment: 1. *Novelty.*
We would like to re-iterate the NeurIPS review policy regarding comparisons to recent work: “Authors are not expected to compare to work that appeared only a month or two before the deadline.” **[B] appeared as an unofficial preprint after the NeurIPS submission deadline, and should be considered concurrent to our work.** Nevertheless, we believe that our submission already presents “persuasive and informative quantitative and qualitative analysis, results and experiments”, which is confirmed by the other reviewers:
- “The idea is simple and seems very effective, as authors illustrate with a number of experiments and ablation over baselines.” (Reviewer sN43)
- “Argumentation for design choices is very solid, and authors motivate their approach with an experiment showing the validity/salience of features found in earlier timesteps of the diffusion model.” (Reviewer sN43)
- “The exploration into what feature representations are learned at different layers/times is interesting (Figures 2 and 3).” (Reviewer HmuP)
1a. *Whether a means to develop diffusion techniques?*
We are unsure why the reviewer thinks that previous work investigating diffusion features for other tasks diminishes our contribution. Our work shows how various naive applications of diffusion models to semantic correspondence (picking a single layer, concatenating across all layers) fail to fully utilize the representations present across layers and timesteps, and presents an effective method to aggregate this information. We are also the first to leverage features from the inversion process, which we discuss in Figure 3 of the main paper and further ablate in Figure 1 of the global response PDF. **Reviewer HmuP notes this as a strength: “Using inversion to get features for real images is a good idea and the explanation that they are more reliable than sampling from the posterior as used in prior work makes sense.”**
1b. *Whether a means to effectively or efficiently aggregate the selected features is proposed?*
While the concept of feature aggregation isn't new, our approach has distinct merits. We are the first to aggregate diffusion features into a single concise descriptor, where our aggregation of features reduces the memory consumption by 300x (1.8 GB to 6 MB) compared with naive concatenation, described further in Table 8 below. Furthermore, our network's design, including the use of shared bottleneck layers across timesteps, further reduces memory usage by 31x, as discussed further in the global response.
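Concretely, the described design (per-layer bottleneck projections shared across timesteps, plus a learned mixture over layers and timesteps yielding one concise descriptor) can be sketched as follows; all names and shapes below are illustrative assumptions, not the paper's released implementation:

```python
import numpy as np

def aggregate(feats, projections, mix_logits, target_hw):
    """Sketch of the aggregation idea: project each (layer, timestep)
    feature map to a shared channel width with a per-layer 1x1
    projection that is shared across timesteps, upsample to a common
    resolution (nearest-neighbor here, for simplicity), and mix
    everything with softmax weights over (layer, timestep) pairs into
    a single dense descriptor map."""
    w = np.exp(mix_logits - mix_logits.max())
    w = w / w.sum()                           # softmax over (layer, time)
    H, W = target_hw
    out = None
    for l, per_time in enumerate(feats):
        for t, f in enumerate(per_time):      # f: (C_l, h, w)
            g = np.einsum("dc,chw->dhw", projections[l], f)  # 1x1 proj
            h, wd = g.shape[1], g.shape[2]
            g = g.repeat(H // h, axis=1).repeat(W // wd, axis=2)
            out = w[l, t] * g if out is None else out + w[l, t] * g
    return out                                # (out_dim, H, W)

# toy usage: 2 layers x 2 timesteps of random "U-Net" feature maps
rng = np.random.default_rng(0)
feats = [[rng.normal(size=(8, 4, 4)) for _ in range(2)],
         [rng.normal(size=(16, 2, 2)) for _ in range(2)]]
projs = [rng.normal(size=(6, 8)), rng.normal(size=(6, 16))]
desc = aggregate(feats, projs, mix_logits=np.zeros((2, 2)), target_hw=(8, 8))
print(desc.shape)  # (6, 8, 8)
```

The memory saving claimed above follows directly from this shape arithmetic: the output is a single (out_dim, H, W) map instead of the concatenation of every per-layer, per-timestep map.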
1c. *Whether a means to select the hyperfeatures are justified by thorough analysis and experimental results?*
We analyze the selection of features learned by our model in Figure 5 of the main paper, which should be comparable to Figure 3 of DHPF [6] which similarly analyzes layer selection frequencies. We are the first to learn this selection not only across model layers but also diffusion timesteps. We provide a detailed discussion of our learned selection in L238-L256 in the main paper and Section 6.2 of the Supplementary, where we also discuss how the layers and timesteps at which the features are most “semantic” differ across Stable Diffusion variants. **Reviewer HmuP agrees that our approach “is shown to outperform [...] hypercolumns (Table 1) showing that useful features have been extracted from the diffusion model; the baselines including using a single layer from the diffusion model demonstrate the benefit of combining over layers/time.”**
Overall we believe that the argument that our method is not novel because its components utilize existing techniques is unfair as most works are compositions of existing methods. We explore naive applications of diffusion models to semantic correspondence, show that these methods rely heavily on hand-selected hyperparameters and fail to fully utilize the information present across time steps and layers, and propose a method to aggregate this information in a memory-efficient manner.
3. *Experiments. In the initial submission, the fact that only two datasets were used for evaluation poses a concern.*
Respectfully, we disagree with this sentiment. We evaluate SPair-71k and CUB because these were specifically proposed to overcome the limitations of previous semantic correspondence datasets which “do not display much variability in viewpoint, scale, occlusion, and truncation” [8]. It has also been stated in prior work that “PF-PASCAL [...] is almost saturated, which makes a comparison difficult” [9]. While 2 datasets may seem like a limited evaluation, we believe the value of an evaluation lies in the quality of the datasets rather than the quantity.
Furthermore, we would like to point out that it is common to address reviewer feedback with additional experiments. For example, the OpenReview page for CATS [10] reveals that the reviewers’ decision was swayed positively because their “concern[s] have been addressed [...] thanks to the additional experiments.”
---
Rebuttal Comment 1.2:
Title: response( 2/2)
Comment: (4) **Computational complexity**: In Section 6, the paper says "our method is fast and uses a reasonable amount of memory". What I meant in the initial review was that this sentence does not provide any indication of memory footprint, run-time, or other computation-related measurements (e.g., FLOPS). I don't understand why the authors are asking the reviewer for suggestions of computational metrics.
For now, my current thoughts are these.
I am happy to have a discussion whole this week.
Looking forward to the authors' response.
Thanks.
---
Reply to Comment 1.2.1:
Comment: 4. *Computational Complexity. [...] clues of how memory footprint, run-time and all other computation related measurements (e.g., FLOPS).*
In Table 8 we give precise run-time and memory statistics of our method, including (a) the size of the descriptor used for the method’s underlying matching algorithm and (b) the average inference time of feature extraction and matching for each pair on the SPair-71k dataset. Our method consolidates the same large set of features from a diffusion model from 1.8 GB (SD-Concat-All, which uses naive concatenation) to 6 MB (Ours). While our full method explores the upper bound by utilizing all available features across the diffusion process, which takes 6.62s, one can also use the same pretrained weights to evaluate faster pruned versions of our model. Below we show results from running our method after stopping the diffusion process after 1, 5, and 10 timesteps. The pruned variant of our method that utilizes the first 10 timesteps performs close to our full method, with a 4\% improvement in PCK\@0.1_img over DINOv2 and an almost 2x faster inference process. Note that the inference time for our DINOv2 baseline is relatively slow because it uses the method from Amir et al. [3], which includes a log binning algorithm to contextualize the features into descriptors.
**Table 8. Memory and run-time comparison.**
| Model | SPair-71k PCK\@0.1_img | Memory per Descriptor | Inference Time per Pair (s) |
|:---------------------|:-----------:|:------------:|:------------:|
| DINOv2 | 68.33 | 75 MB | 2.99 |
| CATS++ | 70.26 | 131 MB | 0.16 |
| SD-Layer-4 | 58.80 | 10 MB | 0.33 |
| SD-Concat-All | 52.12 | 1.8 GB | 0.87 |
| Ours (1 timestep)   | 64.61 | 6 MB | 0.28 |
| Ours (5 timesteps)  | 69.28 | 6 MB | 0.86 |
| Ours (10 timesteps) | 72.00 | 6 MB | 1.60 |
| Ours (50 timesteps) | 72.56 | 6 MB | 6.62 |
[B] Zhang et al. "A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence." arXiv 2023. \
[6] Min et al. “Learning to Compose Hypercolumns for Visual Correspondence.” ECCV 2020. \
[7] Podell et al. “SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis.” arXiv 2023. \
[8] Min et al. “SPair-71k: A Large-scale Benchmark for Semantic Correspondence.” arXiv 2019. \
[9] Hong et al. “Cost Aggregation with 4D Convolutional Swin Transformer for Few-Shot Segmentation.” ECCV 2022. \
[10] Cho et al. “CATs: Cost Aggregation Transformers for Visual Correspondence.” NeurIPS 2021. | Summary: This paper proposes improving feature distillation from diffusion models for representation learning by aggregating information from the feature maps of the U-Net at varying timesteps, weighting them with a tunable aggregation network. The authors show that even at timesteps from which features are usually discarded for feature learning (i.e. early steps in the generation process), useful features can be found. After formulating their approach to extracting these features using a weighted aggregation of a set of standardized input feature maps, the authors validate their approach by improving considerably over baseline methods and prior work that only selects a single diffusion timestep for feature extraction. Lastly, the authors show the ability of their framework to generalize to unseen synthetic data (generated by the diffusion model) by extracting features from the generation process of a diffusion model trained on a different data distribution.
Strengths: - The idea of extracting features from powerful generative models such as diffusion models is very relevant, and the authors show that their approach to this extraction process obtains great results.
- The paper is very well-written and reads very smoothly. Authors are very descriptive in their wording and use figures to clearly illustrate their approach. The idea is simple and seems very effective, as authors illustrate with a number of experiments and ablation over baselines.
- Argumentation for design choices is very solid, and authors motivate their approach with an experiment showing the validity/salience of features found in earlier timesteps of the diffusion model.
- The authors also show impressive transfer performance to an unseen dataset, highlighting possible applications in pseudo-label generation for semantic correspondence.
Weaknesses: - I have no major concerns. A minor concern I have is with regards to novelty, given that extracting hypercolumn features has been attempted before, even for diffusion models.
- Another minor concern is the requirement for task-specific fine-tuning of the aggregation network. The fact that this network requires explicit supervision for a task means that the actual representation extracted from the diffusion model that can be used in arbitrary downstream tasks is the whole set of features maps across timesteps. Would it be possible to train an aggregation network on a self-supervised task?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - For figures 2, 3, would it be possible to show also the network input at these timesteps for reference?
- Line 117, you mention that “these observations indicate that the diffusion model provides coarse and fine features that capture different image characteristics, throughout different combinations of layers and timesteps”. I’m wondering to what extent this is owing to the formulation of the diffusion process itself, and to what extent this is due to the use of a very specific multi-resolution processing architecture (U-Net) in the diffusion process. I.e. is the extraction of both coarse and fine image features a result of diffusion or a result of using a U-Net? Would your method also work with diffusion models that make use of other architectures?
- Line 135, I have a bit of trouble understanding your reasoning for the more “trustworthy” inversion features. Could you elaborate? Why would repeated application of the model necessarily lead to more “trustworthy” features?
- You apply your feature extraction specifically for semantic correspondence detection. Could the extracted features also be used in other settings? Are there any limiting factors that prevent your framework from being used in such settings? Have you tried e.g. classification of extracted features?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors do not discuss the limitations or societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. *For figures 2, 3, would it be possible to show also the network input at these timesteps for reference?*
In Figure 2 of the main paper, the input to the diffusion model is the text prompt “Cat sitting in a living room” and random noise for $x_{T}$. In Figure 3 of the main paper, the input to the diffusion model is the empty text prompt “” and a noisy version of the real image for $x_{25}$. We agree that the inputs can be unclear and will revise these figures in the final manuscript.
2. *Line 117 [...] is the extraction of both coarse and fine image features a result of diffusion or a result of using a U-Net?*
We agree that the multi-scale nature of our features across layers is in large part due to the U-Net architecture used in the underlying diffusion model. However, prior work such as Amir et al. [1] finds that ViT architectures also contain this type of coarse vs. fine information, where “shallow features mostly contain positional information” and “deeper layers [...] favor [...] more semantic features.” We note that the evolution of these features over time, owing to the nature of the diffusion process itself, is also a critical component of our method. In Table 1 of the main paper, our final method that uses all layers across all timesteps significantly outperforms the variant of our method that uses all layers at a single timestep (Ours - One-Step) by 9\% in PCK\@0.1_img. Intuitively, timesteps where the input is noisier in the diffusion process produce features that capture more low-frequency statistics, providing orthogonal information also useful for the semantic correspondence task, compared to when the input is clean, as seen in Figure 2 of the main paper.
3. *Line 135 [...] more “trustworthy” inversion features.*
**In the PDF document attached to the global response, Figure 1 quantitatively compares the performance of inversion vs. generation features at each timestep, where inversion features generally perform better across the board, with as much as a 1-5\% increase in PCK\@0.1_img at noisier timesteps $t=25$ to $t=50$.** Because the DDIM inversion process should be able to deterministically recover the real image, and the model predicts the noise added to the image at each timestep, we hypothesize that the predicted noise carefully destructs information at a specific band of frequencies appropriate for the timestep, compared with using random noise. Beyond DDIM inversion, one interesting further direction of research is exploring the quality of features derived from other inversion processes proposed in the community.
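For reference, the deterministic DDIM step and its inversion discussed here can be written in standard notation (not reproduced from the paper) as:

```latex
% Deterministic DDIM sampling step: estimate the clean image from the
% predicted noise, then re-noise to the previous noise level.
\hat{x}_0 = \frac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}},
\qquad
x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\,\hat{x}_0 + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(x_t, t).
% DDIM inversion runs the same update in the opposite direction
% (t -> t+1), so starting from a real image x_0 it deterministically
% traces out a noise trajectory whose intermediate activations can be
% used as features:
x_{t+1} = \sqrt{\bar{\alpha}_{t+1}}\,\hat{x}_0 + \sqrt{1-\bar{\alpha}_{t+1}}\,\epsilon_\theta(x_t, t).
```

Because no random noise is injected, running the reverse direction on the resulting trajectory recovers the original image, which is the "deterministic recovery" property the rebuttal refers to.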
4. *Could the extracted features also be used in other settings?*
Yes, the extracted features should certainly be usable for other tasks such as classification, for example if one were to add a classification head to process the features and train it with the appropriate loss. We have also included in the global response PDF some new applications in semantic appearance transfer and video mask propagation.
[1] Amir et al. “Deep ViT Features as Dense Visual Descriptors.” ECCV-W 2021.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I would like to thank the authors for taking the time to answer my questions and concerns, as well as those of the other reviewers. Reviewer c4Xr makes a number of solid points; in particular, their concerns regarding missing experimental details seem important to resolve before finalizing this submission.
The authors provide a solid rebuttal with extensive new experiments. I understand reviewer c4Xr's concern regarding the impact of these large modifications to the review process, but I personally think it's a valid way of improving the manuscript's quality. I think these modifications should be weighed in favour of acceptance in the final decision, as they strengthen the manuscript. I stand by my recommendation of acceptance, but understand the reluctance of other reviewers. I think the idea is simple and effective, and after rebuttal I think the manuscript is strong and reads well. | Summary: This paper proposes an approach for extracting useful features from pre-trained diffusion models for application to dense visual correspondence tasks. How to do this is not clear due to the presence of features both throughout the network and over diffusion steps. The proposed approach is to learn which features to use via the hypercolumns framework, by passing activations through bottleneck layers and then averaging over layers and time. Experiments show that this method outperforms unsupervised DINO features and supervised hypercolumns.
Strengths: - The premise of extracting useful features from diffusion models is an important one; we know that those features are present but it is not clear how to extract useful ones.
- Using inversion to get features for real images is a good idea and the explanation that they are more reliable than sampling from the posterior as used in prior work makes sense.
- The exploration into what feature representations are learned at different layers/times is interesting (Figures 2 and 3).
- The approach is shown to outperform unsupervised DINO features as well as hypercolumns (Table 1) showing that useful features have been extracted from the diffusion model; the baselines including using a single layer from the diffusion model demonstrate the benefit of combining over layers/time; and visual examples shown in Figures 4-6 demonstrate the approach working.
Weaknesses: - Unlike using DINO features, the proposed approach requires training additional components for downstream tasks, making it much less versatile.
- The baseline models in Table 1 are a poor representation of current approaches. DINO features are not trained specifically for keypoint matching and newer supervised approaches such as CATS++ (Cho et al. TPAMI 2022) and VAT (Hong et al. ECCV 2022) substantially outperform DHPF.
- The approach shares bottleneck layers across time steps; this seems potentially problematic since features differ substantially across different times.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does it quantitatively compare against more recent supervised approaches?
- Why share bottleneck layers over time steps? They contain very different representations.
- Why not use attention features from the diffusion model in a similar manner to DINO?
- Aggregating features over time steps is difficult, hence the trained layers on top. Could using a distilled diffusion model such as a consistency model give better representations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As stated in the submission details, limitations are not discussed. There are a number of limitations including having to train additional components (unlike DINO), not comparing with more recent supervised methods, and sharing layers across time steps.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. *How does it quantitatively compare against more recent supervised approaches?*
**Please see the global response for a comparison to CATS++ and DINOv2, where we outperform both methods on SPair-71k by 2\% and 4\% PCK\@0.1_img respectively.**
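For reference, the PCK@0.1 metric reported throughout these comparisons can be sketched as follows (a minimal numpy version; the choice of normalizer, max image side vs. max bounding-box side, is what distinguishes the `_img` and `_bbox` variants):

```python
import numpy as np

def pck(pred, gt, size, alpha=0.1):
    """Percentage of Correct Keypoints: a predicted keypoint counts as
    correct if it lies within alpha * size of the ground-truth keypoint,
    where size is the normalizer (e.g. the max image side for
    PCK@0.1_img, the max bounding-box side for PCK@0.1_bbox).

    pred, gt: (N, 2) arrays of keypoint coordinates."""
    dist = np.linalg.norm(pred - gt, axis=1)
    return float((dist <= alpha * size).mean())
```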
2. *Why share bottleneck layers over time steps?*
Please see the global response for an ablation of individual bottleneck layers per timestep, where **we demonstrate that our final method with shared bottleneck layers performs comparably within 1\% PCK\@0.1_img on SPair-71k with significant savings in memory consumption.**
3. *Why not use attention features from the diffusion model in a similar manner to DINO?*
**We find that the residual block (resblock hidden) and the best performing attention features (attn value) perform comparably, which we ablate in Table 7.** For an individual layer (SD-Layer-4) resblock hidden outperforms attn value by 3\% PCK\@0.1_img, and when concatenating all layers (SD-Concat-All) resblock hidden underperforms attn value by 4\% PCK\@0.1_img. In practice it is more straightforward to use these residual block features because selecting amongst the key, query, value, or token features requires additional hyperparameter tuning and can significantly vary across models. For example, while Amir et al. [1] find that the Layer 9 key features perform the best in DINOv1, in Table 5 of the global response we demonstrate that these same features perform the worst in DINOv2.
**Table 7. Residual Block Features vs. Self-Attention Features (SPair-71k PCK\@0.1_img)**
| Model | resblock hidden | attn key | attn query | attn value | attn token |
|:---------------------|:-----------:|:------------:|:------------:|:------------:|:------------:|
| SD-Layer-4 | **58.80** | 50.73 | 53.14 | 55.60 | 49.62 |
| SD-Concat-All | 52.12 | 44.84 | 48.68 | **55.79** | 55.32 |
4. *Could using a distilled diffusion model such as a consistency model give better representations?*
Since consistency models [2] directly map noise to data, without an iterative sampling process, they cannot provide features that vary across timesteps. While this may yield a faster feature extraction process, it provides fewer features for our aggregation network to select from, which can degrade performance. For example, our method that uses features from a single timestep (Ours - One-Step) performs 9\% worse in PCK\@0.1_img than our method that uses features from all timesteps (Ours) in Table 1 of the main paper.
5. *As stated in the submission details, limitations are not discussed.*
We will be sure to include a more extensive discussion of our method’s limitations in the final manuscript.
[1] Amir et al. “Deep ViT Features as Dense Visual Descriptors.” ECCV-W 2021.\
[2] Song et al. “Consistency Models.” ICML 2023.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I would like to thank the authors for their responses. I still have concerns regarding the evaluation, however, the rebuttal did address my concerns to some degree and I will increase my score to borderline accept accordingly.
In particular, I thank the authors for the experiment showing the impact of having different bottleneck layers per time step, this addresses my concern that sharing across all steps could be problematic. Similarly, the explanation on resblock vs attention features addresses that question. The added quantitative comparison to another supervised approach (CATS++) is very informative and it is nice to see that this approach is able to outperform existing supervised methods, going some way to address my concerns on baselines; as do the added results on other datasets/applications and generalisation to different domains.
My main remaining concern is still the lack of baselines meaning that it is hard to determine concretely where performance improvements come from - e.g. is it from hyperfeatures + a stronger feature backbone, or is it from the diffusion model in particular. As such, I agree with the other reviewers’ concerns on novelty (I have not taken into account the concurrent diffusion semantic correspondence papers mentioned by the other reviewers); aggregating over time is a small novelty, but in my opinion the main contribution comes from the experiments showing that diffusion features are especially useful for this task and worth using over other faster to obtain features, which I think is lacking. While the paper does this to some degree, and the added supervised results help, as mentioned by the other reviewers, there are many more methods including more supervised (e.g. IFCAT), other generative models, aggregation networks on self-supervised networks, existing methods that extract feature descriptors from diffusion models, etc. that it would be really informative to compare/ablate against to show that the features learned by the diffusion model really are especially useful.
To summarise, since the problem of extracting features from diffusion models is of interest and, as added in the rebuttal, the approach appears to perform well compared to recent supervised approaches, I have increased my score. But this paper would be much stronger if we knew better where the improvements come from through a stronger set of baselines/ablations, so I have only increased my score to borderline accept. Happy to discuss with the authors if they disagree with any of these comments.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for taking the time to review our additional results and for providing feedback regarding our evaluation. In Table 9, we ablate the effect of single-layer selection, naive concatenation, and training an aggregation network for DINO and DINOv2, symmetric to the ablations we performed for our method. Note that to ensure a consistent evaluation across all backbones, we use the same input resolution of 224. Training an aggregation network noticeably improves the performance of both DINO and DINOv2, by 3\% and 8\% respectively. Interestingly, single-layer selection of both DINOv2 and Stable Diffusion features (SD - Layer 4) performs comparably, but training an aggregation network on top of the Stable Diffusion features (i.e., over both timesteps and layers) yields a larger relative boost (14\% PCK\@0.1_img) than DINOv2. **Ultimately, our method that trains an aggregation network on top of Stable Diffusion features performs the best at 72.56\% PCK\@0.1_img, compared with 54.69\% and 68.37\% PCK\@0.1_img for DINO and DINOv2 respectively.** In Table 10 we also verify that our aggregation network on top of DINO features learns mixing weights consistent with the hand-selected features explored in Amir et al. [3], where it indeed learns that Layers 9 - 11 are particularly useful for the semantic correspondence task. Since IFCAT [4] does not have publicly available code, we display the result reported in the original paper. As such, with these additional baselines we validate that the strong performance comes not only from the strong backbone but also from aggregating across the layers and timesteps of a diffusion model in particular.
**Table 9. SPair-71K**
| Model | PCK\@0.1_img | PCK\@0.1_bbox |
|:---------------------|:-----------:|:------------:|
| IFCAT [4] | - | 64.40* |
| | |
| DINO [3] | 51.68 | 41.04 |
| DINO - Concat All | 20.17 | 13.60 |
| DINO + Aggregation Network | 54.69 | 44.29 |
| | |
| DINOv2* | 60.14 | 46.94 |
| DINOv2 - Concat All | 60.89 | 47.69 |
| DINOv2 + Aggregation Network | 68.37 | 56.35|
| | |
| SD - Layer 4 | 58.80 | 46.58 |
| SD - Concat All | 52.12 | 41.83 |
| Ours | **72.56** | **64.61** |
\*Since IFCAT does not have publicly available code, we take the result reported in the original paper.\
\*In our previous reported experiments with DINOv2 in Tables 1-4, we used input images of resolution 770 to account for its large patch size. To ensure a fair evaluation in this table, we use input images of resolution 224 for all methods.
**Table 10. DINO + Aggregation Network Learned Mixing Weights**
| Layer | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11|
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| | 4% | 3% | 3% | 3% | 4% | 8% | 5% | 9% | 11% | **14%** | **14%** | **14%**|
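For concreteness, the kind of per-layer bottleneck plus learned mixing-weight aggregation reported in Table 10 can be sketched as follows (a minimal numpy version with illustrative shapes and placeholder random weights; the actual network is trained end-to-end on the correspondence loss):

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_features(feats, proj, logits):
    """Project each layer's feature map to a shared channel dim with a
    per-layer bottleneck, then mix the results with softmax weights.

    feats:  list of (H, W, C_l) arrays, one per layer (and/or timestep)
    proj:   list of (C_l, D) bottleneck projection matrices
    logits: (L,) mixing-weight logits"""
    w = np.exp(logits - logits.max())
    w = w / w.sum()  # softmax mixing weights (these are what Table 10 reports)
    mixed = sum(wl * f @ p for wl, f, p in zip(w, feats, proj))
    return mixed, w

# Toy example: 3 "layers" with different channel widths, shared output dim 8.
dims = [4, 6, 10]
feats = [rng.standard_normal((16, 16, c)) for c in dims]
proj = [rng.standard_normal((c, 8)) for c in dims]
desc, w = aggregate_features(feats, proj, np.zeros(3))
```

With zero logits the weights are uniform; training would shift mass toward the most useful layers and timesteps, as in the learned weights shown above.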
[4] Hong et al. “Integrative Feature and Cost Aggregation with Transformers for Dense Correspondence.” arXiv 2022. | Rebuttal 1:
Rebuttal: ### Summary
We thank the reviewers for their helpful feedback and suggestions, which we will integrate into the final manuscript. In this work we present a “simple and [...] very effective” (Reviewer sN43) framework for consolidating the internal representations of a diffusion model for tasks such as semantic correspondence.
To re-state our motivation: prior work shows that features from a hand-selected layer & timestep can be useful for downstream applications. Our work demonstrates that the most useful features are not contained within a single layer and timestep, but rather are distributed across _all layers and timesteps_ of the diffusion sampling process (different choices of time and layer often contain complementary information, as seen in Figure 2 of the main paper). Unfortunately, naive concatenation across all features results in excessively high dimensional descriptors that equally weigh all source features, making distance metrics less practical and useful. Our final proposed approach solves this by aggregating these high-dimensional features into a more useful low-dimensional descriptor map that is trainable for a given target task. **We have performed additional evaluations on PF-WILLOW and PF-PASCAL to demonstrate that our method is transferable. We provide them below, along with new applications in semantic appearance transfer and video mask propagation (see PDF).**
The reviewers agree that “the premise of extracting useful features from diffusion models is an important one” (Reviewer HmuP) and that it is “very relevant” (Reviewer sN43). **Moreover, we clearly demonstrate that we achieve “SOTA performance [...] with large gap to existing works” (Reviewer HmuP), which we further validate with new comparisons to CATS++ and DINOv2 in Table 1 below.**
**Table 1. SPair-71K**
| Model | PCK\@0.1_img | PCK\@0.1_bbox |
|:---------------------|:-----------:|:------------:|
| DINOv2 [1] | 68.33 | 56.98 |
| CATS++ [2] | 70.26 | 57.06 |
| Ours - Indiv. Bottleneck per Timestep | 73.07 | 65.09 |
| Ours | **72.56** | **64.61** |
**Table 2. CUB**
| Model | PCK\@0.1_img | PCK\@0.1_bbox |
|:---------------------|:-----------:|:------------:|
| DINOv2 [1] | 89.96* | 76.83* |
| CATS++ [2] | 75.92 | 59.49 |
| Ours | **82.29** | **69.42** |
\*Please note that DINOv2’s training set included CUB
### Comparison to CATS++ and DINOv2
**In Table 1, we display results for both CATS++ and DINOv2 on SPair-71k. While these methods are competitive, our method still achieves the best result at 72.56 in PCK\@0.1_img, with a 2\% increase over CATS++ and 4\% increase over DINOv2.** For CUB, PF-PASCAL, and PF-WILLOW (Table 2, 3, 4) we outperform CATS++ by 6\%, 19\%, and 11\% in PCK\@0.1_img respectively. Similarly, DINOv2 performs worse than our method across the board, except for CUB where it exhibits unusually high performance *because it was trained on samples from CUB* (Table 15 of the Appendix in [1]). In Table 5 we display the hyperparameter sweep we conducted for DINOv2 ViT-S/14, leading us to use the Layer 11 token features for our DINOv2 baseline.
### Individual Bottleneck Layer per Timestep Ablation
Our choice to share bottleneck layers is an effort to reduce model size at the cost of a slight performance decrease. **As seen in Table 1, both the method with individual bottleneck layers per timestep and our final method perform similarly, with less than a 1\% degradation in PCK\@0.1_img.** Training a bottleneck layer for each timestep would require 132 projection layers (812.85 MB) compared to just 12 projection layers (26.12 MB) when the bottleneck layers are shared.
**Table 3. PF-PASCAL [3]**
| Model | PCK\@0.1_img | PCK\@0.1_bbox |
|:---------------------|:-----------:|:------------:|
| DINOv2 [1] | 84.30 | 78.99 |
| CATS++ [2] | 68.02 | 62.96 |
| Ours | **86.67** | **82.85** |
**Table 4. PF-WILLOW [4]**
| Model | PCK\@0.1_img | PCK\@0.1_bbox |
|:---------------------|:-----------:|:------------:|
| DINOv2 [1] | 86.64 | 71.34 |
| CATS++ [2] | 78.87 | 66.09 |
| Ours | **89.61** | **77.98** |
### Additional Evaluation Datasets
We focused on SPair-71k and CUB because they presented more complex and varied examples than the other benchmarks, which are largely composed of simple image pairs with “similar viewpoints and scales” [5]. That being said, we are happy to include results on PF-PASCAL and PF-WILLOW (Table 2, 3 above). Note that across all these datasets we transfer the model purely trained on SPair-71k for evaluation. **When transferred to PF-PASCAL and PF-WILLOW, our method outperforms DINOv2 by 2\% and 3\% PCK\@0.1_img respectively.**
**Table 5. DINOv2 Hyperparameter Selection (SPair-71k PCK\@0.1_img)**
| Model | key | query | value | token |
|:---------------------|:-----------:|:------------:|:------------:|:------------:|
| dinov2_vits14 - Layer 9 | 18.74 | 19.10 | 52.47 | 55.08 |
| dinov2_vits14 - Layer 11 | 36.97 | 36.46 | 64.25 | **68.33** |
**References**\
[1] Oquab et al. “DINOv2: [...].” arXiv 2023.\
[2] Cho et al. “CATs++: [...].” TPAMI 2022.\
[3] Ham et al. “Proposal flow.” CVPR 2016.\
[4] Ham et al. “Proposal flow: Semantic correspondences from object proposals.” PAMI 2017.\
[5] Min et al. “SPair-71k: [...].” arXiv 2019.
Pdf: /pdf/fb10d7d42d56d1dcaafa122f178a9f96fc9ad530.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper explores semantic correspondence tasks with the Stable Diffusion model. Specifically, the authors propose to first extract feature maps varying across timesteps and layers from the diffusion process, and then to train a lightweight neural network to aggregate them for semantic correspondence.
Experimental results on the CUB-200 and SPair-71k datasets show that the proposed method outperforms other baselines.
Strengths:
- This paper shows that one could explore diffusion models for semantic correspondence tasks.
- The writing is clear and easy to follow.
- The experiments show that the proposed method could outperform other baselines on the SPair-71k real image benchmark.
- The authors have shown numerous visual examples to demonstrate the correspondence capability.
Weaknesses: 1. The proposed method, which "distill[s] the information distributed across time and space from a diffusion process into a single descriptor map", seems to have more potential than correspondence tasks alone. Have the authors explored other tasks, like video label propagation, homography estimation, or perception tasks like classification?
2. Concurrent works: there are multiple papers presenting correspondence ability of diffusion models:
[1*] Hedlin, Eric, et al. "Unsupervised Semantic Correspondence Using Stable Diffusion." arXiv preprint arXiv:2305.15581 (2023).
[2*] Tang, Luming, et al. "Emergent Correspondence from Image Diffusion." arXiv preprint arXiv:2306.03881 (2023).
[3*] Zhang, Junyi, et al. "A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence." arXiv preprint arXiv:2305.15347 (2023).
Could the authors explain the differences if possible?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: please refer to the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. *Have the authors explored other tasks [...]?*
Please see the global response PDF for new applications in semantic appearance transfer and video mask propagation.
2. *Concurrent works: there are multiple papers presenting correspondence ability of diffusion models: [...] Could the authors explain the differences if possible?*
Indeed there are a few concurrent works that have appeared in unofficial preprints after the submission of this project. **One main conceptual difference with these concurrent works is that we aggregate features across all timesteps of the diffusion process**, which we motivate in Figure 2 of the main paper and ablate in our experimental results (Ours vs Ours - One-Step in Table 1 of the main paper, where we achieve a 10\% boost in PCK\@0.1_bbox). In contrast, these works all use a hand-selected *single timestep.* In Figure 5 of the main paper we demonstrate why heuristics for hand-selecting specific diffusion features may not be generalizable, since SDv1-5 and SDv2-1 display significantly different behavior in which combinations of layers and timesteps are the most “semantic,” as automatically learned by our aggregation network. **Finally, we also report the best keypoint matching performance on SPair-71k (64.6 PCK\@0.1_bbox) compared with 45.4 [1], 52.9 [2], 62.9 [3] PCK\@0.1_bbox respectively.**
[1] Hedlin et al. "Unsupervised Semantic Correspondence Using Stable Diffusion." arXiv 2023. \
[2] Tang et al. "Emergent Correspondence from Image Diffusion." arXiv 2023. \
[3] Zhang et. al. "A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence." arXiv 2023. | null | null | null | null | null | null |
Efficient Exploration in Continuous-time Model-based Reinforcement Learning | Accept (poster) | Summary: The submission considers online RL where the true dynamics are continuous-time. To deal with this issue, the authors provide a continuous-time model-based method. The proposed method is distinguished by
1. iteratively fitting ODE-based models and deriving/learning the control from the fitted models
2. having the option to adaptively select the sampling times.
The proposed method is able to well handle problems with continuous-time true dynamics, with both theoretical guarantees and empirical evidence.
Strengths: The submission is novel. The proposed method incorporates continuous-time models with adaptive sampling, which to the best of my knowledge is the first.
The submission provides thorough results with both theoretical guarantees and also empirical experiments.
The studied problem is well motivated: there are problems where the data is generated in continuous-time in need of specialized methodology like the submission.
The writing is clear and easy to follow.
Weaknesses: The submission is solid, and the adaptive MSS seems to be an important and useful technique. However, one concern seems crucial: it is not clear whether the continuous modeling is indeed needed.
My question is twofold: 1. whether it is necessary or practical to assume continuous-time ground-truth dynamics; 2. whether it is necessary or helpful to use continuous-time models for $f$ in the method.
First of all, the continuous dynamics in section 2 and equation (1) may never be empirically achieved or used. There are very rare cases where one can really implement a policy in a continuous manner as in that equation. As a result, the policy functions considered will oftentimes be piece-wise constant, which does not fit (1). It is not clear how this may affect the theory. When evaluating a policy, equation (1) may also need to be discretized. Therefore, in each episode, one can also fit a discrete-time model instead of a continuous-time one as suggested by the authors. I am wondering how the proposed method compares to this discrete-time version of the proposed method.
Second, theoretical results like Proposition 1 also hold in the discrete-time setting. The adaptive MSS may also be feasible in the discrete-time setting; indeed, it is itself implemented in a discretized fashion.
Therefore, the observations above make me wonder whether the continuous-time part is really needed in the submission.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the experiments, are the cost function collected in a discrete-time manner or continuous-time manner? More specifically, are the results summation of costs or the integral of costs over time?
2. What difference might be, if we replace the ODE model by a discrete-time model, for example, like iLQR. Please demonstrate the differences both theoretically and empirically.
3. In the experiments, why does the proposed method with equidistant MSS perform better than the discrete zero-order hold OC? Are the two policies implemented using the same time interval on the same grid? Or does the proposed method calculate the integral of costs over time? Or does it use a different time interval than the competing discrete-time policy? What is the definition of the discrete zero-order hold OC?
4. Is it possible to implement MSS with a discrete-time assumed model? For example, I can assume that the true model is a discrete-time model which is just same as the Euler discretized version of the ODE in the submission, and then conduct adaptive MSS like in the submission. What is the problem of this naive ablation?
5. Would it be possible to explicitly list the dynamics of the experiments? The cost function is provided in the supplements but not the dynamics. I am concerned that the considered experiments are too simple like all linear ODEs, which may make the method not general enough.
As a summary, the submission definitely has novelty and contributions. However, I feel that the current method is not demonstrated clearly enough, especially regarding what role the continuous-time modeling plays in the method.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and valuable feedback!
## Weaknesses
1. *Why continuous-time modeling:*
Continuous time learning has several benefits over discrete-time modeling. For example, all systems in natural sciences are continuous in nature, therefore introducing priors into the learning problem is easier in continuous time (discrete-time priors are susceptible to the choice of discretization). Furthermore, when learning discrete-time dynamics, the sampling and control frequency is fixed to the discretization of the problem formulation. For a new choice of discretization, generally one has to relearn the dynamics. In continuous time, once the ODE is learned, the learned model can be used for any discretization.
We appreciate the comment of the reviewer that real-world systems are controlled in discrete time and not continuously. In our analysis, we consider a general policy class. A special case of this policy class is piece-wise constant policy functions, which as the reviewer acknowledges are more practical (we do not explicitly include the time-dependency of the policy in eq (1) for simplicity). Furthermore, as we show in our results, we can obtain continuous-time performance with our algorithm empirically. Furthermore, we compare our algorithm with the true discrete-time models in Table 1 and show that our method outperforms the discrete-time case. Since we already outperform the ground truth discrete-time model, we did not evaluate a discrete-time model-based RL method additionally.
In summary, continuous time modeling has several advantages over its discrete-time counterpart as we discuss above. Our problem formulation also considers the more practical class of piecewise constant policy functions, and in our results, we show that continuous time modeling achieves better performance. We hope we could convince the reviewer about continuous time modeling with our response.
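To illustrate the "any discretization" point above with a minimal numpy sketch: once a continuous-time model $f$ is available, the same $f$ can be rolled out with a zero-order-hold policy at an arbitrary step size (explicit Euler here for simplicity; the dynamics and names are illustrative toys, not the paper's systems):

```python
import numpy as np

def rollout(f, x0, u_fn, T, dt):
    """Roll out a continuous-time model dx/dt = f(x, u) with a
    zero-order-hold policy u_fn over horizon T at discretization dt,
    using explicit Euler. The same f serves any choice of dt."""
    n = int(round(T / dt))
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for k in range(n):
        x = x + dt * f(x, u_fn(k * dt))
        traj.append(x)
    return np.array(traj)

# Toy "learned" dynamics dx/dt = -x + u, zero control: one model, two grids.
f = lambda x, u: -x + u
coarse = rollout(f, [1.0], lambda t: 0.0, T=1.0, dt=0.1)
fine = rollout(f, [1.0], lambda t: 0.0, T=1.0, dt=0.001)
```

The fine rollout approaches the true continuous-time solution $e^{-1}$, while a discrete-time model fit at `dt=0.1` would have to be relearned to be used on the fine grid.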
## Questions
1. *Cost function evaluation:*
The cost is always evaluated in the continuous-time setting. That means that no matter whether we control the system continuously or with a discrete zero-order hold, we evolve the dynamics in continuous time and compute the continuous-time cost (up to ODE solver precision).
2. *Difference between ODE vs discrete dynamics:*
We are not sure if we understood the question exactly. Here is a potential response: we compare our continuous-time modeling approach to the ground-truth discrete-time model in Table 1. Our experiments show that our method outperforms the discrete-time case. Learning discrete-time dynamics is a well-studied problem, where similar theoretical guarantees exist [1].
3. *Equidistant performs better than discrete zero-order hold OC:*
We refer the reviewer to the Author Rebuttal for how the costs are calculated in our experiments. As discussed in the author rebuttal section, discrete-time modeling and control are limited to the discretization scheme. In contrast, continuous-time modeling benefits from representing the dynamics between the discretization nodes and adapting the controls accordingly. With equidistant MSS we take measurements equidistantly in time. However, we still learn a continuous-time model and use it for continuous-time control.
From Table 1, we see that the performance of our learned continuous-time controller is better than the discretized zero-order hold controller obtained using the true dynamics.
4. *MSS for discrete-time models:*
We are not entirely sure we understood your question here. Here is a tentative answer:
MSS can be used in the setting where we can only observe the state at some fixed discretized times. However, it still requires learning dynamics in continuous time. This is because discrete-time modeling requires observing transitions $(x_{k}, u_{k}, x_{k+1})$ at each time step. Did we provide the answer you had in mind or were you interested in something else?
5. *List of dynamics models:*
All considered systems except for the blood-glucose system are nonlinear, and all are standard in the literature. For the sake of completeness, we have added them to the updated version of the paper.
Having addressed all of the questions provided by the reviewer, and given the contributions of this paper, we would appreciate it if the reviewer would increase their score for our paper. We would be happy to answer any remaining questions or concerns.
## References
[1] Curi, Sebastian, Felix Berkenkamp, and Andreas Krause. "Efficient model-based reinforcement learning through optimistic policy search and planning." Advances in Neural Information Processing Systems 33 (2020).
---
Rebuttal Comment 1.1:
Title: Follow up on rebuttal
Comment: We hope we could address your concerns adeptly. We would further like to emphasize another benefit of continuous-time modeling over its discrete-time counterpart.
Discrete-time algorithms such as H-UCRL only work for the setting where the measurement frequency and control frequency are the same. Continuous time modeling can separate these two elements. We leverage this in our work, by proposing MSS that only collects data that benefits in learning the ODE.
In summary, due to continuous-time modeling, our proposed method has more control over the system’s measurement and control frequencies, incorporates a general class of policies such as piece-wise constant policies with any choice of control frequency, and *outperforms the true discrete-time model*.
We have updated the paper to further discuss the benefits of continuous-time modeling. We would appreciate it if you could reconsider your assessment, or respond with questions/suggestions so that we can improve the paper in this regard.
---
Summary: This paper proposes a continuous-time framework for model-based reinforcement learning. Their algorithm OCoRL solves the optimal control problem in eq. (1) by: (1) selecting an optimistic policy; (2) rolling out to collect data; (3) updating the model estimate and statistics. Specifically, they study the measurement selection strategy (sampling state-action pairs in the continuous-time framework), and show both theoretically and empirically the effect of different measurement selection strategies.
Strengths: The paper models the optimal control or reinforcement learning problem in continuous time (eq. (1)), which is elegant and also convenient for algorithm design and analysis. The realization of state-action pairs at discrete time steps is naturally modeled as discrete-time sampling from the continuous-time trajectory (the measurement selection strategy (MSS)). The proposed algorithm alternates between optimistic planning given the current statistical model and data collection for a more accurate model. They further propose an adaptive MSS that samples state-action time points based on the variance of the model estimate along the planned rollout trajectory.
Both theoretical and empirical analysis are provided and showcase the impact of different discrete sampling strategies.
Weaknesses: The algorithm enjoys elegance and nice theoretical properties, but it can be difficult to realize in practice. Especially, the first step in OCoRL that solves optimistic policy can be very time-consuming in practice. The authors mentioned in Appendix C that it is approximately solved by Iterative Linear Quadratic Regulator, but still it can be impractical.
The experiments mainly focus on investigating the impact of measurement selection strategies. A comparison in terms of performance (time/computation complexity and overall cost) with existing approaches (PPO, etc.) would better help readers evaluate this approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your positive and valuable feedback!
# Weaknesses and Questions
1. *A comparison in terms of performance (time/computation complexity and overall cost) with existing approaches (PPO, etc.) would better help readers evaluate this approach*:
As you correctly noticed, the first step of OCoRL, namely solving optimization problem (2) from the paper, is challenging. We tried to solve optimization problem (2) using black-box optimal control optimizers. In particular, we used iLQR coupled with MPC. We solved the continuous-time optimal control problem (2) by discretizing it with a time step small enough that further refining the discretization changed the achieved cost only negligibly.
In Table 1, we compare our work with the ground-truth continuous-time dynamics and its discretization. The baselines suggested by the reviewer (PPO, etc.) operate in discrete time. Since we already compare our algorithm to the ground-truth discrete-time system, we did not include other discrete-time baselines. We would also like to highlight that the number of collected transitions is only in the range of [200, 500]; this represents a considerable gain in sample efficiency compared to model-free methods such as SAC or PPO.
---
Summary: This paper proposes a continuous-time model-based reinforcement learning method for controlling fully observed dynamical system environments where there is a cost to take a sample of the state. A Gaussian process dynamics model is used, and a novel adaptive measurement selection strategy is proposed to determine when to take a sample of the state, such that the overall optimization converges to the optimal policy in the limit of infinite trials. The proposed method (OCoRL) is theoretically analyzed to show a general regret bound holds for any measurement selection strategy and is empirically verified across a range of dynamical system environments.
Strengths: * The proposed method OCoRL and the associated general regret bound that holds for any measurement selection strategy appear novel.
* The paper is well-written and clearly laid out.
* The no-regret algorithm for nonlinear dynamical systems in the continuous-time RL setting seems widely applicable and relevant to the RL and ML community.
* The code is reproducible and easily extendable, being well documented throughout.
Weaknesses: * How does the proposed method perform when the state differential is not observed $\dot{x}_n(t)$ and has to be inferred? Could you perform an ablation of this to show that the proposed method is still practical?
* Why is the noise only added to the observed state derivatives $\dot{x}_n(t)$? Perhaps it could be more realistic to consider noise added to both the observed state $x_n(t)$ and the observed state derivative $\dot{x}_n(t)$?
* Line 118: "predicted mean and epistemic uncertainty". How does the model guarantee that you only measure the "epistemic uncertainty" and not the "epistemic uncertainty and aleatoric uncertainty"? This was not clear, and I presume the model learns both. If so, how can you split the uncertainty to only use the epistemic uncertainty, as outlined in the method?
* The adaptive MSS assumes that $m_n=\left \lceil{T/ \Delta_n}\right \rceil$, i.e., that a sample must be taken in each uniform interval $\Delta_n$ of time. This seems overly restrictive. Can OCoRL be adapted to skip taking a sample in some $\Delta_n$ of time, i.e., where they are not needed or informative?
* On this, can a further ablation be performed where $m_n$ is varied across all the environments and baselines to empirically verify the adaptive MSS claims for wide ranges of $m_n$?
* Unsubstantiated claim of "We compare the adaptive and equidistant MSSs on all systems and observe that the adaptive MSSs consistently perform better than the equidistant MSS". Table 1 shows that adaptive MSSs can perform better than equidistant MSS only in certain environments and that equidistant MSS achieves the same performance (final cost) within error bounds for the environments of Cancer Treatment, Pendulum, and Mountain Car.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Could this approach be generalized to other environments that are not implicitly defined by an ODE, such as other types of differential equations, e.g., delayed differential equations, stochastic differential equations, etc? Furthermore, can this approach work for partially observable environments?
* What is the trade-off of varying the number of measurements taken, i.e., an ablation of varying M for all experiments? Does the algorithm still hold under these settings? Can this be demonstrated empirically?
* Could the OCoRL method be benchmarked against the closest related works of Yildiz et al. (2021) and Du et al. (2020)?
* (From above): Can OCoRL be adapted to skip taking a sample in some $\Delta_n$ of time, i.e., where they are not needed or informative?
* Typo: Line 339: "Figure 1" -> "Figure 2".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are addressed with the assumptions outlined in Section 2.1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thanks a lot for the positive and valuable feedback!
## Weaknesses
1. *Case when state derivatives $\\dot{x}(t)$ are not observed:*
We thank the reviewer for this very interesting question. When the state derivatives are not observed, one can apply several techniques to obtain them (e.g., finite differences, interpolation methods, etc. [1-3]). We consider the incorporation of derivative estimation inside the RL loop an interesting avenue for future work.
2. *Noise also in the state $x(t)$:*
The noise can be added to the states $x(t)$ as well (though only as measurement noise). However, in the analysis we would then need to transfer the noise from the inputs to the outputs via a Taylor approximation and also update the $\sigma$ of the sub-Gaussian noise assumption. For simplicity and flow of exposition, we assumed noise only on the outputs of the learned function, i.e., $\dot{x}(t)$.
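To make the input-to-output noise transfer concrete, a first-order Taylor sketch (in our notation; not a formal part of the paper's analysis) reads:

```latex
\dot{x} \;=\; f^*(x + \epsilon, u) \;\approx\; f^*(x, u) + \nabla_x f^*(x, u)\,\epsilon ,
```

so state-measurement noise $\epsilon$ would appear approximately as output noise with covariance $\nabla_x f^* \,\mathrm{Cov}(\epsilon)\,(\nabla_x f^*)^\top$, inflating the sub-Gaussian parameter $\sigma$ accordingly.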
3. *Measuring uncertainty:*
Yes, the model learns both aleatoric and epistemic uncertainty. We split the uncertainty into aleatoric and epistemic parts and use only the epistemic part for planning. In the GP case, we learn homoscedastic aleatoric uncertainty by optimizing the noise variance $\sigma^2_{ale}(x) = \sigma^2$ in the negative log-likelihood term: $\frac{1}{2}\dot{\mathbf{y}}^\top (\mathbf{K} + \sigma^2I)^{-1}\dot{\mathbf{y}} + \frac{1}{2}\text{logdet}(\mathbf{K} + \sigma^2I)$. The epistemic uncertainty (epistemic variance) is obtained from the formula $\sigma^2_{epi}(x) = k(x, x) - k(x, \mathbf{X})(\mathbf{K} + \sigma^2I)^{-1}k(\mathbf{X}, x)$. Here we denote by $\mathbf{K}$ the covariance matrix built from the observations, and $k(x, \mathbf{X})$ is the vector of kernel evaluations between the point $x$ and all observations. When we use deep ensembles for modeling the dynamics, every member of the ensemble learns to predict the mean and the aleatoric (heteroscedastic) uncertainty (variance). To obtain the (approximate) epistemic standard deviation, we take, as is commonly done, the standard deviation of the predicted means of the ensemble members.
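For concreteness, the GP formulas above can be sketched in a few lines (a minimal illustration with a squared-exponential kernel; the function names and kernel choice are ours, not the paper's code):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_uncertainty_split(X, x_star, sigma2_ale):
    """Split the GP predictive variance at x_star into aleatoric and
    epistemic parts: the aleatoric part is the learned noise variance,
    the epistemic part is k(x,x) - k(x,X)(K + sigma^2 I)^{-1} k(X,x)."""
    K = rbf_kernel(X, X)
    k_star = rbf_kernel(x_star[None, :], X)[0]      # vector k(x, X)
    A = K + sigma2_ale * np.eye(len(X))
    epi = rbf_kernel(x_star[None, :], x_star[None, :])[0, 0] \
          - k_star @ np.linalg.solve(A, k_star)
    return sigma2_ale, max(float(epi), 0.0)
```

Only the epistemic part would then feed into planning; the aleatoric part is held fixed.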
4. *Skipping measurement in* $\Delta_n$ *interval time*:
With the assumptions of the paper, we were not able to get rid of sampling in every interval $\Delta_n$ of time. However, if we, for example, assume some kind of stability (so that the trajectory cannot deviate exponentially fast in time), we can alleviate this dependence. A second possible approach is to consider event-triggered sampling, where we would take a measurement only when the epistemic uncertainty at the true state surpasses a certain value. Again, for this kind of approach we would need another assumption, namely the ability to continuously monitor the system (which one could argue can be done with hardware). Regarding changing $m_n$ in our experiments: if we let $m_n$ be large, the difference between all MSSs becomes negligible. The main difference between the different MSSs arises when the number of collected data samples is small. We chose $m_n$ small enough so that the difference is visible. However, upon the reviewer's request, we performed an ablation study for different values of $m_n$ on the pendulum environment. We have attached a figure in the Author Rebuttal section with our results. The figure shows that for small values of $m_n$ there is a significant difference between the MSSs, while for larger values of $m_n$ this difference vanishes.
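The per-interval selection rule can be sketched as follows (a toy illustration; the uncertainty along the hallucinated trajectory is abstracted as a callable, and all names are ours):

```python
import numpy as np

def adaptive_mss(epistemic_std, horizon_T, m_n, grid_points=50):
    """Pick one measurement time per uniform interval Delta_n = T / m_n,
    at the candidate point with the highest predicted epistemic
    uncertainty along the planned (hallucinated) trajectory."""
    delta = horizon_T / m_n
    times = []
    for i in range(m_n):
        # Dense candidate grid inside the i-th interval.
        grid = np.linspace(i * delta, (i + 1) * delta, grid_points)
        times.append(grid[np.argmax(epistemic_std(grid))])
    return np.array(times)
```

An event-triggered variant would instead drop the one-sample-per-interval requirement and fire only when the uncertainty crosses a threshold, at the cost of the continuous-monitoring assumption mentioned above.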
5. *Unsubstantiated claim based on Table 1:*
Thanks for spotting that, we have corrected the claim in the revised version. From the table, it is only possible to claim that adaptive MSS outperforms equidistant MSS on the Cancer Treatment, Pendulum, Bicycle, Furuta Pendulum, Quadrotor 2D, and Quadrotor 3D environments while performing on par (within the overlapping confidence sets) on Glucose in Blood, Mountain Car, and Cart Pole environments.
## Questions
1. *Generalization to other environments:*
In this work, we only consider systems driven by an ODE. Other types of systems like delayed differential equations, stochastic differential equations, and partially observable environments are still an open problem and we leave them as exciting future work.
2. *Varying number of taken measurements M:*
With a small M (small number of observations), adaptive MSS performs much better than equidistant MSS. When the number of measurements grows, the adaptive and equidistant MSSs should result in similar performance (in our experiments) since we collect enough data in all regions (important and unimportant). We refer the reviewer to the newly added figure above for an ablation study.
3. *Comparison to Yildiz et al. (2021) and Du et al. (2020):*
The setting of Yildiz et al. (2021) and Du et al. (2020) is a bit different since they do not consider access to noisy derivatives as we do. To compare with the work of Yildiz et al. (2021), we ran experiments with a mean (greedy) planner (the planner used by Yildiz et al. (2021)) instead of our optimistic one. As we can see in Figure 3 in the paper, the optimistic planner is faster at finding the optimal policy.
4. *Skipping observations:*
In practice (experimentally), we can indeed skip taking the measurements in some $\Delta_n$ interval of time, if the uncertainty there is not high (in hallucination), however with this we lose theoretical guarantees.
5. *Typo:*
Thanks for spotting that; we were indeed referencing Figure 1. We fixed the typo in the updated version of the paper.
## References
[1] Treven, L., Wenk, P., Dorfler, F., and Krause, A. (2021). Distributional gradient matching for learning uncertain neural dynamics models. Advances in Neural Information Processing Systems.
[2] Chartrand, R. (2011). Numerical differentiation of noisy, nonsmooth data. International Scholarly Research Notices, 2011.
[3] Knowles, I. and Renka, R. J. (2014). Methods for numerical differentiation of noisy data. Electron. J. Differ. Equ, 21:235–246.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. I see your point about the restrictive assumption of taking a sample in every $\Delta_n$ time interval. I am keeping my original score.
---
Summary: This paper introduces a novel algorithm for efficient exploration in continuous-time model-based reinforcement learning. The algorithm represents continuous-time dynamics using nonlinear ODEs and captures epistemic uncertainty using probabilistic models. The analysis shows that the approach achieves sublinear regret with significantly fewer samples, making it a promising solution for various applications. The paper also develops a practical adaptive measurement selection strategy that reduces the number of measurements per episode while retaining regret guarantees. The benefits of continuous-time modeling and planning with optimism are demonstrated in several environments. The authors aim to catalyze further exploration within the RL community regarding the potential of modeling dynamics in a continuous-time framework.
Strengths: This paper appears to be one of the pioneering works in the field of continuous-time reinforcement learning (RL) applied to nonlinear dynamical systems.
Furthermore, the utilization of epistemic uncertainty for measurement selection, a technique commonly employed in active learning but rarely explored in RL, adds a unique and valuable dimension to this study.
In addition to these notable contributions, the paper introduces several innovative techniques and investigates their efficacy across various experimental environments.
Weaknesses: The paper encompasses a wide range of techniques, including model-based RL, continuous-time RL, and aperiodic strategies. However, the abundance of new content may make it challenging to write and comprehend the relationship between each technique and the overall idea. To enhance clarity and understanding, it would be beneficial to reorganize the paper and provide clearer connections between the different techniques employed.
Regarding Table 1, it appears that, under a known true model, discrete-time control outperforms continuous-time OC without the inclusion of MSS. Consequently, it may not be fair to directly compare continuous-time OC with MSS and discrete-time control since the conditions and components differ significantly between these approaches. A more appropriate comparison could be made between continuous-time OC and discrete-time control without MSS.
To provide a more comprehensive evaluation, it is advisable to compare this method with a broader range of other proposed RL methods. The current results may not fully demonstrate the performance of OCoRL, and including additional comparisons would enhance our understanding of its capabilities.
In Figure 3, it would be beneficial to consider incorporating more environments for experimentation. Relying solely on a single environment may not provide enough evidence to draw robust conclusions. Including a broader set of environments would strengthen the validity and generalizability of the findings.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In which scenarios are measurements considered costly? Have you discussed in the Introduction whether there are enough scenarios that truly require continuous-time systems due to costly measurements?
In the problem setting, the computation of cumulative regret relies on an optimal policy. However, if the optimal policy is unattainable for a particular problem, can your method still be applied? Please provide an explanation.
Regarding Line 122, is it possible for re-calibration techniques to provide accurate uncertainty estimation while maintaining a low time complexity?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: First, thank you very much for your positive and valuable feedback! We will indeed incorporate the proposed ideas into the updated version of the paper.
## Weaknesses
1. *Enhance Clarity:*
We thank the reviewer for the feedback. We have added the following summary of our method at the end of section 3:
*OCORL consists of two key and orthogonal components: (i) optimistic policy selection and (ii) measurement selection strategies (MSSs). In optimistic policy selection, we plan a trajectory optimistically w.r.t. plausible dynamics and roll out the resulting policy. We study different MSSs, such as the typical equidistant MSS, for data collection within the framework of continuous-time modeling. Furthermore, we propose an adaptive MSS that measures data where we have the highest uncertainty on the planned (hallucinated) trajectory. We show that OCORL suffers no regret for the equidistant, adaptive, and oracle MSSs.*
2. *Comparisons in Table 1:*
Note that the numbers in the table represent the cost, so smaller is better. Hence, continuous-time OC under the true model achieves the best (lowest) value any controller can attain (up to numerical precision). Since in the discrete-time control setting we are only allowed to change the controller at equidistant times, the cost (computed on the realized continuous trajectory) is strictly larger (worse).
3. *Comparison to other methods:*
We could also compare against other discrete-time algorithms (SAC, PPO, etc.). However, since our algorithm controls the system continuously and outperforms even the best possible discrete-time controller (i.e., the case when we know the true dynamics and can solve the optimal control problem with any solver), we decided to show the comparison only against the best possible discrete-time control (with the given discretization).
4. *Further experiments on optimism vs greedy exploration*
The comparison between greedy and optimistic planning has been done in several related works, e.g., in [1], where it is shown that optimistic planning performs better than greedy planning, especially in environments with scarce rewards or large action penalties. Since this is a well-studied problem and not within the focus of our paper, we decided not to add additional environments for this demonstration due to space restrictions. If the reviewer still insists on other environments, we can add them to the final version.
## Questions:
1. *Costly Measurements:*
There are plenty of scenarios where taking measurements is costly. In the introduction, we consider the case of a patient coming to the doctor for medical checks. We do not want the patient to come too often (every appointment is very costly), but only when it is necessary (when the uncertainty about the development of the disease is large). Another instance can be found in wireless control systems, where there are constraints on energy, computation, and communication capacity. In such systems, communication should take place only when there is pertinent information to share [2].
2. *Unattainable optimal policy:*
We are not sure that we understand the second question exactly; please let us know if the following is what you meant. It can happen that the solution to the optimal control problem is not attainable (the infimum exists, but the minimum is not attained). For the analysis, we assume the minimum is attained. For merely running the algorithm, we do not need that assumption.
3. *Recalibration techniques:*
When modeling dynamics with GPs, we do not need to recalibrate the model since GPs provide statistically sound confidence sets by design (with the right value of $\beta_n$). For the recalibration of an ensemble of neural networks, we need to solve $d_x$ (the output dimension of the neural network) optimization problems over a scalar variable, which can be done very fast in practice; in our experiments, recalibration usually takes only around 10 ms.
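As an illustration of how cheap such a scalar recalibration can be, here is one common quantile-based variant (our illustrative choice, not necessarily the exact procedure used in the paper):

```python
import numpy as np

def recalibrate_beta(mu, std, y, target=0.9):
    """Per-dimension scalar recalibration: find the smallest beta such that
    the interval mu +/- beta * std covers a `target` fraction of held-out
    targets y. A one-dimensional problem solved per output dimension."""
    z = np.abs(y - mu) / std      # normalized residuals
    return float(np.quantile(z, target))
```

Running this once per output dimension amounts to $d_x$ scalar problems, consistent with the millisecond-scale cost mentioned above.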
## References
[1] Curi, S., Berkenkamp, F., and Krause, A. (2020). Efficient model-based reinforcement learning through optimistic policy search and planning. Advances in Neural Information Processing Systems, 33:14156–14170.
[2] Anta, Adolfo, and Paulo Tabuada. "To sample or not to sample: Self-triggered control for nonlinear systems." IEEE Transactions on automatic control 55.9 (2010): 2030-2042.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. Considering your rebuttal and the comments of the other reviewers, I stick to the current score.
---
Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable and useful feedback. We believe there has been some confusion around the discrete-time control setting we consider in our work. Accordingly, we have clarified this further in the appendix of the updated paper. We include a summary below:
1. When we learn a system that follows the ODE $\dot{x} = f^*(x, u)$ in discrete time, we can only learn the discretized dynamics of the system, i.e., $x_{k+1} = f(x_k, u_k)$. This approach is severely limited by the choice of discretization.
2. For the learned discretized system $x_{k+1} = f(x_k, u_k)$, we *cannot model (predict) the behavior of the system between two time steps.* Therefore, our control inputs can only be changed at the discretization frequency.
3. Nonetheless, the underlying system is still evolving continuously over time. Hence, even if we model and control the system with the discrete model, we evaluate the state evolution and the corresponding cost with an ODE integrator, where the control changes at the discretization frequency.
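Point 3 can be sketched as follows (a minimal illustration with a hand-rolled RK4 integrator; the function names and toy dynamics are ours, not from the paper):

```python
import numpy as np

def evaluate_zoh(f, policy, x0, T, control_dt, sim_dt=1e-3, cost=None):
    """Roll out a zero-order-hold controller on continuous dynamics x' = f(x, u).
    The control is held constant over each interval of length control_dt, while
    the state (and running cost) is integrated with a fine RK4 step, so the
    reported cost reflects the true continuous evolution."""
    x, t, total_cost = np.asarray(x0, float), 0.0, 0.0
    while t < T - 1e-12:
        u = policy(x)                      # control fixed for this interval
        t_end = min(t + control_dt, T)
        while t < t_end - 1e-12:
            h = min(sim_dt, t_end - t)
            k1 = f(x, u)
            k2 = f(x + 0.5 * h * k1, u)
            k3 = f(x + 0.5 * h * k2, u)
            k4 = f(x + h * k3, u)
            if cost is not None:
                total_cost += cost(x, u) * h   # left-endpoint quadrature
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
    return x, total_cost
```

The controller only changes at multiples of `control_dt`, yet the state and cost are integrated on the fine `sim_dt` grid.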
Lastly, we have attached a pdf of an additional experiment that studies different measurement selection strategies as requested by the reviewers. We included the results of the additional experiment in the appendix of the updated version of the paper. We would be happy to further update the paper if there are any remaining questions or feedback from the reviewers.
Pdf: /pdf/1679327d377108d9f30873ba97feedb039f07a9e.pdf
(Source: NeurIPS_2023_submissions_huggingface, 2023)
---
Title: Nonparametric Teaching for Multiple Learners
Decision: Accept (poster)
Summary: This paper extends nonparametric teaching from the setting of teaching each learner independently to teaching multiple learners simultaneously. The method teaches a vector-valued model, which improves over existing methods when multiple learners can communicate with each other. There are both theoretical and experimental results validating the effectiveness of this method.
Strengths: The paper seems to have solid theoretical analysis; it's hard for me to judge as a non-expert.
Weaknesses: Only one experiment on the RGB channels of images is shown in the paper. Another experiment in a different setting may help demonstrate the generalizability of the approach. As someone with no expertise in this area, I'll defer to others on the technical side of things.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you elaborate on what other possible applications can benefit from the proposed approach?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments! We respond in detail to your specific concerns in the following.
**Q1**: Due to the page limitation, we have provided two experimental results in the main paper. We also show additional performance evaluations of MINT under various settings and present detailed demonstrations in the appendix. These include testing MINT with a specific initialization of $f^0$ in RGB teaching tasks, comparing RFT with GFT using channel-wise visualization, and conducting experiments with synthetic bivariate mixture Gaussian data. All of these experiments align with our theoretical findings, which highlight the superior efficiency of communicated MINT over the vanilla approach, which in turn outperforms single-learner teaching. We will improve our presentation to note our additional experiments in the appendix.
**Q2**: This theoretical work has the potential for application in the field of knowledge distillation [g], where the teacher (cumbersome) model is to transfer knowledge to the learner (small) model by sharing “soft targets”. The idea presented in this work could serve as inspiration for future investigations into distilling the knowledge of a teacher to multiple learners. Such a knowledge transferring framework, in turn, can open up new avenues of exploration and possibilities across different domains, such as computer vision [h], Internet-of-Things [i], natural language processing [j] and decision-making tasks [k]. In the revision, we will expand upon these potential applications by providing additional discussions to delve deeper into their implications.
[g] Hinton et al. Distilling the knowledge in a neural network. NeurIPS 2014 Deep Learning Workshop.
[h] Wang et al. Gradient-based algorithms for machine teaching. CVPR 2021.
[i] Xu et al. Locality sensitive teaching. NeurIPS 2021.
[j] Li et al. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. ICML 2022.
[k] Yengera et al. Curriculum Design for Teaching via Demonstrations: Theory and Applications. NeurIPS 2021.
---
Summary: The paper extends non-parametric machine teaching to the case of multiple learners. In particular, each learner learns one component of a vector-valued function. The authors consider the case where learners have no communication with each other and the case where there is "communication" via a matrix transformation of the outputs of each individual learner.
Strengths: Originality: I am not an expert in machine teaching so I can't say how original the contribution is relative to prior work.
Quality: The paper appears to be of good quality, though I did not check the proofs carefully.
Clarity: The paper was relatively clear, though with key exceptions referenced below
Significance: The paper appears to me to be an incremental step beyond the non-parametric single-learner setting.
Weaknesses: I believe the paper needs more motivation and needs to contrast with the case where a single learner is learning a vector-valued function. Currently, the paper extends the scalar-valued single-learner setting to one where the teacher is teaching multiple learners---each outputting one component of a vector-valued output. In the introduction, the paper explains why teaching multiple learners at once would be better than teaching multiple learners separately. That seems quite clear, but then why not just teach one single learner that is learning a vector-valued function rather than multiple learners learning each component separately? The introduction seems to hint at computational issues but this should be made more explicit, and if computational issues are a critical motivation, then experiments showing the computational advantage should be included.
Especially because the paper also includes a section on "communication" between the learners and shows that they achieve better performance when communication is allowed. (By the way, the communication, in this case, is just a matrix that the teacher gives each learner.. the learners aren't learning to communicate with each other, so it seems clear from the outset that this would lead to improvement) So if communication leads to higher performance, then why not full "communication", i.e., one learner that outputs all components?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I thought the experiments section could be made clearer (I also looked at the additional details in the appendix but was still confused). What is the domain and range of the functions being learned? What are the examples (x and y) that the teacher is giving for each settting? Are they pixels?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I ask that the authors replace the Lenna image with any other image. There is no reason that this paper needs to use, of all images, the Lenna image, a cropped photo of a nude women from a Playboy centerfold. The continued unnecessary use of this image perpetuates an uninclusive environment.
https://womenlovetech.com/losing-lena-why-we-need-to-remove-one-image-and-end-techs-original-sin/
https://www.washingtonpost.com/opinions/a-playboy-centerfold-does-not-belong-in-tj-classrooms/2015/04/24/76e87fa4-e47a-11e4-81ea-0649268f729e_story.html
Lena herself has said that she no longer wants her image used: “I retired from modelling a long time ago,” said Lena in a new documentary film called Losing Lena. “It’s time I retired from tech, too. We can make a simple change today that creates a lasting change for tomorrow. Let’s commit to losing me.”
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the useful comments. We are deeply appreciative of the reviewer’s efforts to help us improve our paper. We take all comments seriously and try our best to address every raised concern. We sincerely hope that our response can resolve your concerns. Any follow-up questions are welcome.
**Q1**: Our motivation for teaching multiple learners, with each learner focusing on learning a separate component of a vector-valued function, stems from the need to align with the scenario described in [c], where a single learner is taught a scalar-valued target model or function. Additionally, teaching multiple learners is an important problem under exploration in machine teaching [a-b]. Interestingly, the mathematical frameworks for analyzing the teaching of a single learner that learns a vector-valued function and the teaching of multiple learners, each focusing on a specific component, should be essentially consistent. Both approaches address the question of how to effectively teach a vector-valued function under the framework of vector-valued functional optimization. The difference is that in the multi-learner approach each learner is connected to a specific component of the vector-valued function, whereas in the single-learner case the entire vector-valued function is connected to that learner.
In this work, “computationally wasteful” means that a larger number of iterations is required to achieve convergence for single-learner teaching than for multi-learner teaching. We also illustrate the benefits of multi-learner teaching over single-learner teaching in terms of loss plots, as demonstrated in Fig. 3 and Fig. 10. We will make the presentation clearer in the revision.
**Q2**: “One learner outputs all components” corresponds to the setting in which a single learner learns a vector-valued function, and it makes sense within that context. However, since the setting in this work is aligned with that of [c], where a single learner is taught a scalar-valued target function, and our focus is on teaching multiple learners, communication occurs among the multiple learners. From a mathematical perspective, these two forms of communication should be essentially consistent, since both involve analyzing the relationships among the components of vector-valued target functions.
**Q3**: In line 322, we provide an explanation that a grayscale image can be visualized as a three-dimensional surface, where the z-axis represents the level of gray, and the x and y axes indicate the pixel coordinates [c]. The domain is determined by the x and y values, which represent the pixel locations, while the range is represented by the z values, indicating the gray levels. In the case of RGB images, each color channel can also be visualized as a three-dimensional surface, similar to grayscale images. Again, the domain corresponds to the pixel locations, and the range represents the corresponding color values. As for Synthetic 1D Gaussian data, the domain is [-14,14], while the range corresponds to the values generated by the Gaussian distribution $\mathcal{N}(x; 0, 5^2)$. We will further polish the presentation in the revision.
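To make the synthetic 1D setting above concrete, here is a minimal, hypothetical sketch (our own illustration, not the authors' code) of generating teaching examples whose domain is [-14, 14] and whose range is given by the density of $\mathcal{N}(x; 0, 5^2)$:

```python
import math

def gaussian_density(x, mu=0.0, sigma=5.0):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def make_teaching_examples(n=57, lo=-14.0, hi=14.0):
    """Evenly spaced (x, y) pairs: the domain is [lo, hi], the range is the Gaussian density."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return [(x, gaussian_density(x)) for x in xs]
```

Each pair (x, y) is one candidate teaching example; the number of points `n` and the even spacing are illustrative assumptions.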
**Q4**: The Lenna image used in Figure 1 and in the experiments conducted in this theoretical work is merely an example of an RGB image, which we considered a well-known image in computer vision. As our attention mainly lies in the theoretical machine learning community, we acknowledge that there is a possibility of us not being fully informed about all the news concerning Lenna. We apologize for any concerns caused by our lack of awareness regarding recent news about Lenna. Rest assured, we will replace the image with a different RGB image in the revision, while ensuring that this does not impact the theoretical findings presented in this work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Unfortunately, I don't think this rebuttal addressed my concerns with the motivation as the response mainly referenced the fact that others are working on it, e.g. "stems from the need to align with the scenario described in [c]" or "additionally, teaching multiple learners is an important problem under exploration in machine teaching [a-b]", but still does not provide a motivation for the scenario in the first place.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for the additional response. We apologize for not being clear enough; we are more than happy to clarify further, and we sincerely hope that our response addresses your concerns.
Indeed, such two cases are similar, and they are two different standpoints of considering the question of how to effectively teach a vector-valued function under the framework of vector-valued functional optimization.
Teaching multiple learners presents a more flexible scenario where the teaching for each learner can conclude upon learning a component of the vector-valued target function. In contrast, teaching a single learner who learns the entire vector-valued function can only terminate once all components of the vector-valued target function are learnt, with the efficiency being determined by the worst-case scenario.
Besides, the multi-learner setting offers a general framework that can be generalized to more complicated scenarios. For instance, cases where each learner operates within a different feature space would be impractical in a single-learner setting that teaches one learner a vector-valued function. Specifically, the multi-learner framework makes it possible to capture diversity, as each individual learner is responsible for learning a component of the vector-valued target function, a capability that is limited in a single-learner setting. On the other hand, when the feature space is consistent across all learners, the two cases become interchangeable, demonstrating that the single-learner setting is a special case within the broader multi-learner setting.
In practice, teaching multiple learners is a very common scenario in which the interplay and trade-offs among multiple learners must be taken into consideration. In this setting, the efficiency of teaching multiple learners needs to be studied -- whether the convergence speed-up from single-learner teaching still holds is particularly important. More broadly, as optimal education is one of the most important motivations for machine teaching, multi-learner teaching brings machine teaching research closer to reality.
---
Reply to Comment 1.1.2:
Comment: We sincerely thank you for your comments on our submission. We have taken great care to address each comment in detail in our rebuttal.
Just a warm reminder that the discussion period is drawing to a close on Aug 21st at 1 pm EDT. We would greatly appreciate it if you could acknowledge receipt of our further responses and inform us whether we have addressed your concerns. We are eager to engage in any further discussions if needed. Once again, thank you for your valuable feedback. | Summary: The paper studied nonparametric teaching in the presence of multiple learners. Following prior works on nonparametric teaching, the paper extended the setting to a scenario where multiple learners simultaneously learn separate components of the joint model. The paper first analyzed the performance of the Random Functional Teaching (RFT) and Greedy Functional Teaching (GFT) strategies and showed that both teachers can guarantee a reduction in the loss function when the learners are completely independent and no communication happens. Secondly, the authors studied the effect of communication between learners and showed that an affine transformation of the joint model does not increase the loss but can significantly enhance loss reduction at the beginning of the training phase. Experiments validated the discoveries in this paper.
Strengths: (1). The problem of teaching multiple learners itself is interesting and underexplored in the machine teaching community. This paper pushes the frontier of machine teaching in this aspect.
(2). The paper theoretically analyzed the loss reduction of the RFT and GFT teachers, and also the benefit of communication. The results show that nonparametric teaching with communication can indeed help with loss reduction.
(3). The paper performed extensive empirical study of the teaching strategy, and the results are convincing.
Weaknesses: (1). The assumptions in the theoretical study are not clearly stated in the paper. For example, the theory seems to rely on the loss function being convex, and this information should be made more prominent upfront. The authors may want to discuss the applicability of both the theory and the methodology developed in this paper. Does it apply to neural network models, or only to convex learners?
(2). One thing missing from the discussion is how the teaching performance compares to vanilla learning. The non-parametric teacher only guarantees that the loss is always reduced. However, how the loss reduction compares to a standard learning process without teaching is an important topic to be discussed.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (1). What is the applicability of the theoretical results and the methodology developed in this paper? Is convexity a required property?
(2). How does non-parametric teaching compare to vanilla learning without teaching? Does it help accelerate the learning process?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the encouraging comments. We sincerely thank the reviewer's efforts for helping us improve the paper. We hope that our response resolves your concerns.
**Q1**: Thanks for pointing it out. We introduce the convex loss assumption above Eq. (7), and in the revision we will make sure to highlight prominent assumptions earlier. Currently, our results are derived for convex learners, which serves as a crucial step towards non-convex cases (e.g., neural networks). Exploring the convergence conditions for neural networks is an intriguing avenue to pursue; for instance, investigating the conditions under which neural networks converge to critical points [f] could involve employing non-convex optimization techniques.
Regarding applicability, one potential application of this work could be in the field of knowledge distillation [g], where the teacher (cumbersome) model transfers knowledge to the learner (small) model by sharing “soft targets”. The idea in this study may serve as inspiration for future research on distilling a teacher's knowledge to multiple learners. Such knowledge-transfer paradigms, in turn, can offer new insights and possibilities in various domains, including computer vision [h], Internet-of-Things [i], natural language processing [j] and decision-making tasks [k]. In the revised version, we will provide additional discussions to further explore these potential applications.
[f] Diakonikolas et al. Sever: A robust meta-algorithm for stochastic optimization. ICML 2019.
[g] Hinton et al. Distilling the knowledge in a neural network. NeurIPS 2014 Deep Learning Workshop.
[h] Wang et al. Gradient-based algorithms for machine teaching. CVPR 2021.
[i] Xu et al. Locality sensitive teaching. NeurIPS 2021.
[j] Li et al. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. ICML 2022.
[k] Yengera et al. Curriculum Design for Teaching via Demonstrations: Theory and Applications. NeurIPS 2021.
**Q2**: In essence, random functional teaching (RFT) employs a random sampling strategy and serves as a straightforward baseline that can be considered as the functional counterpart of stochastic gradient descent. As it randomly selects examples, RFT can also be seen as a form of "learning without teaching". On the other hand, the greedy functional teaching (GFT) teacher adopts a greedy approach by selecting examples that maximize the gradient. Through theoretical and empirical analysis, we have demonstrated that GFT outperforms RFT in terms of efficiency. In the revision, we will provide additional explanations to better illustrate these points.
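As a toy illustration of the two selection rules contrasted above (our own simplification, not the paper's algorithm: we use the absolute residual |f(x) − y| as a stand-in for the functional gradient magnitude):

```python
import random

def rft_select(pool, rng):
    """Random Functional Teaching: pick a teaching example uniformly at random."""
    return rng.choice(pool)

def gft_select(pool, current_f):
    """Greedy Functional Teaching: pick the example with the largest residual,
    a simplified proxy for maximizing the functional gradient."""
    return max(pool, key=lambda xy: abs(current_f(xy[0]) - xy[1]))
```

In the paper's setting, the teacher iterates such a selection step; GFT's greedy choice is what yields its faster per-example loss reduction compared to RFT.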
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My questions are addressed in the rebuttal. The paper is technically solid and above average, so I decided to keep my current positive score. | Summary: This paper investigates the iterative machine teaching problem under the non-parametric learner setting with vector-valued target models, also known as multi-learner nonparametric teaching (MINT). The authors consider two teaching strategies: Random Functional Teaching (RFT) and Greedy FT (GFT). The authors first theoretically analyze the convergence behavior induced by the teaching strategies in the vanilla MINT setting, where there is no communication between the learners. Then, they study the teaching strategies in the communicated MINT setting. Finally, they empirically compare the effectiveness of different teaching strategies.
Strengths: The paper is overall well-written, and the related work is extensively discussed.
The theoretical results in this paper seem correct; I haven’t checked the details of the proofs.
Weaknesses: The availability of a powerful teacher with a vector-valued target model needs to be motivated with more realistic practical scenarios.
The novelty of the contributions of this work in comparison to “Nonparametric Iterative Machine Teaching (NIMT)”: the problem formulation/setup, RFT/GFT teaching strategies are extensions from the NIMT paper. Thus, it is important to clearly discuss how non-trivial these extensions are and how different the proof techniques are from those of the NIMT paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The theoretical results in the paper are based on a synthetic example generation setting. If the teacher is restricted to choosing examples from a pool, that would result in different examples than the ideal one. Then, what would be the impact on the results?
Given that the teacher can freely synthesize examples to guide the learner toward a target model, can the results discussed be extended to learners with non-convex loss functions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper is of an algorithmic/theoretical nature and does not have any direct potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the useful comments. We are deeply appreciative of the reviewer’s efforts to improve our paper. We take all comments seriously and try our best to address every raised concern. We sincerely hope that our response resolves your concerns.
**Q1**: An important problem towards realistic applications of machine teaching lies in classroom teaching [a-b], where a teacher is responsible for teaching multiple learners. One motivation behind this work comes from exploring such realistic multi-learner teaching, and we investigate multi-learner teaching based on nonparametric teaching. Specifically, we generalize the model space from the space of scalar-valued functions to that of vector-valued functions. We will add more discussion on realistic practical scenarios in the revision.
[a] Yeo et al. Iterative classroom teaching. AAAI 2019.
[b] Zhu et al. No learner left behind: on the complexity of teaching multiple learners simultaneously. IJCAI 2017.
**Q2**:
- In terms of contribution, we extend single-learner nonparametric teaching [c] to general multi-learner teaching. We achieve this by formulating multi-learner teaching as the teaching of vector-valued functions under the framework of vector-valued functional optimization. Because the correlation between the components of a vector-valued function has the potential to enhance teaching efficiency, we also investigate communicated teaching scenarios, where multiple learners can execute a linear combination of the currently learnt functions of all learners, which is more practical and non-trivial.
- From a technical standpoint, we explain the difference in proof techniques between this work and single-learner teaching [c] in lines 219 and 245. Specifically, we introduce the expectation operation over random sampling, which allows us to average out the impact of randomness. This also enables us to quantify the difference between RFT and GFT by introducing the distance between ${x_i^t}^*$ and $\mu_i$ in Theorem 10, line 280, which is not considered in [c].
- In experiments, we conduct investigations in more comprehensive multi-learner teaching settings (e.g., RGB images and bivariate Gaussian mixture data), demonstrating the effectiveness of multi-learner teaching and expanding the potential applications of nonparametric teaching.
We will further polish the presentation to highlight these in the revision.
[c] Zhang et al. Nonparametric iterative machine teaching. ICML 2023.
**Q3**: The teaching ability of pool-based teachers is limited due to their constrained knowledge domain, which is a subset of that of synthesis-based teachers. This constraint can result in the learner converging to a suboptimal ${f^*}'$, as mentioned briefly in line 190. We will add more discussion in the revision.
**Q4**: Very inspiring question! In this work, the analysis of the loss function is conducted under the assumption of convexity, which serves as a stepping stone towards handling nonconvex scenarios. It would be interesting to theoretically investigate the convergence performance and the additional conditions for convergence with non-convex losses, as they may vary depending on the specific task at hand.
For instance, we can potentially treat a non-convex loss locally as a convex one [d], enabling us to apply the methodology developed in this work straightforwardly. By doing so, we can analyze the local convergence globally. Additionally, it might be a potential extension to make use of convex relaxation technique [e] to transform a non-convex problem into a convex one for handling nonconvex problems.
[d] Razaviyayn et al. Parallel successive convex approximation for nonsmooth nonconvex optimization. NeurIPS 2014.
[e] Xie et al. Orthogonality-promoting distance metric learning: convex relaxation and theoretical analysis. ICML 2018.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and for addressing my concerns. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Activation Function Optimization through Surrogate Modeling | Accept (poster) | Summary: The paper presents a new method for improving the performance of neural networks through the design of optimal activation functions. The authors created benchmark datasets by training convolutional, residual, and vision transformer architectures with systematically generated activation functions. They then developed a new surrogate-based method for optimization, which uses the spectrum of the Fisher information matrix and the activation function's output distribution to predict performance. The method was tested on CIFAR-100 and ImageNet tasks, and the results showed significant improvements in accuracy.
Strengths: 1. This paper introduces an innovative approach to enhancing activation functions, surpassing existing techniques in both efficiency and effectiveness.
2. The paper is exceptionally well-written, and the experiments conducted are notably thorough.
3. The benchmark datasets created by the authors provide a foundation for future research on activation function properties and their impact on performance.
Weaknesses: None
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer sps3**
Thank you for the review. Please let us know if you have any questions that we can address in the upcoming author-reviewer discussion period. | Summary: This paper introduces three benchmark datasets created by training CNN, ResNet, and ViT architectures using a set of activation functions generated from a three-node computation graph that combines unary and binary operations.
The benchmarks serve to showcase the efficacy of utilizing the 2D UMAP of the Fisher information matrix (FIM) spectrum and/or activation outputs as a cost-effective surrogate for predicting activation performance. Leveraging the 2D feature representation, an efficient activation optimization method, AQuaSurF, is developed by employing regression techniques to model activation accuracy across the 2D feature space, requiring only 100 function evaluations. The benchmark results further demonstrate the effectiveness and statistical reliability of this approach.
The proposed method is successfully applied to various vision tasks, where the discovered activation functions consistently outperform existing baseline activations. Moreover, the top activations identified through this search exhibit successful transferability to a new vision task.
Strengths: The paper is well-written and easy to follow.
The approach of utilizing the UMAP embedding of the FIM spectrum with activation outputs to assess activation performance is novel and interesting.
In contrast to previous methodologies that relied mostly on evolutionary algorithms and required thousands of function evaluations, the method proposed in this work demonstrates efficiency by outperforming baselines with just 100 function evaluations.
Furthermore, the benchmark datasets introduced in this work, may potentially help accelerate research on activation optimization.
Overall, this paper offers valuable insights for assessing activation performance and also introduces a more efficient methodology for activation optimization.
Weaknesses: In Section 6, the authors apply their proposed method to more challenging datasets and a larger activation search space, compared to those used to create the benchmarks. To further evaluate the effectiveness of the approach it would be beneficial to apply the method (KNR on UMAP embeddings) to vision tasks involving new network architectures as well.
While the chosen baseline activations in Table 1 already include ReLU and Swish, used in the original three architectures studied in the paper, in order to further strengthen the results it would still be advantageous, and perhaps straightforward, to extend the list of baselines at least to those used in PANGAEA, including GELU, LeakyReLU, etc.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1- In the first paragraph of page 3 the authors observe, based on the scatter plots in Fig 2, that "best results come from discovering functions specialized to individual tasks". However, upon comparing the upper-left and lower-right corners of the plots with the upper-right region it appears that the best functions on one task also transfer effectively to, and are potentially among the best on, the other task. Is this interpretation correct?
2- In the middle row of Fig 4. The UMAP depends only on the activations and not the model. However, there appears to be differences in the distribution of points in the 3 plots (and also compared to Fig.3). Is this because of filtering out failed activations and possible rescalings / reflections of the space? A brief comment on this would enhance clarity for readers.
3- On lines 228-229 of the manuscript "Thus, activation functions are embedded close to each other in this space if they have similar shapes, if they induce similar FIM eigenvalues, or both", considering that the metric on the union of the representations is the sum of the metrics on the individual representations, then shouldn't the activations be close to each other only if they have both similar shapes and similar FIM eigenvalues?
4- How does AQuaSurF compare with PANGAEA in terms of performance? given the partial similarity of the search spaces, is it possible to make a direct comparison between the two methods (e.g. by limiting the space to non-parametric functions)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are partly addressed in the Future Work section in the appendix. There are no concerns regarding negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer x7yQ**
---
> It would be beneficial to apply the method (KNR on UMAP embeddings) to vision tasks involving new network architectures as well.
This is a great idea. We included CNN, ResNet, and ViT models in the paper to cover a wide range of possible architectures and would be happy to add additional comparisons to the revision. We also made the AQuaSurF code publicly available and spent additional effort to write documentation and ensure that the code is easy to use, so we hope that additional comparisons in the future can be run by the community as well.
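For concreteness, here is a minimal, stdlib-only sketch of the kind of k-nearest-neighbors regression (KNR) surrogate the reviewer refers to; the function name and the choice of k are ours, not from the paper:

```python
import math

def knr_predict(train_points, train_accs, query, k=3):
    """Predict accuracy at a 2D embedding coordinate as the mean accuracy
    of the k nearest already-evaluated activation functions (Euclidean distance)."""
    ranked = sorted(
        (math.dist(p, query), acc) for p, acc in zip(train_points, train_accs)
    )
    return sum(acc for _, acc in ranked[:k]) / k
```

In a search loop, one would score all candidate embeddings with such a surrogate, train the few highest-scoring activation functions for real, append the measured accuracies to the training set, and repeat.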
---
> In order to further strengthen the results it would still be advantageous, and perhaps straightforward, to extend the list of baselines at least to those used in PANGAEA, including GELU, LeakyReLU etc.
This is an excellent idea, and not only because it provides for more comparisons, but because it is not straightforward. In order to conduct the experiment properly, we need to provide the surrogate with the performance of GELU, LeakyReLU, and any other activation functions we choose to compare against. This extra information will naturally influence the surrogate’s predictions, and so we need to restart the search from scratch in order to make the comparison fair. Thus, such comparisons raise an interesting question: How much does the performance of the surrogate depend on the number of initial activation functions it is given? We are excited to run this experiment and will add it to the final revision.
---
> It appears that the best functions on one task also transfer effectively to, and are potentially among the best on, the other task.
Good observation. There are indeed some activation functions that perform well across multiple architectures. However, note that the best functions are specialized to a specific task. Note also that Figure 2 only shows the distribution of accuracies for activation functions in the benchmark datasets. When searching in larger spaces (as was done in Section 6), we do not know what the distribution of accuracies looks like. The most important qualitative insight from Figure 2 is that specialized activation functions do exist, and so we should exploit this fact when searching for functions in more open-ended search spaces. We will clarify this point in the main text.
---
> In the middle row of Fig 4. The UMAP depends only on the activations and not the model. However, there appears to be differences in the distribution of points in the 3 plots (and also compared to Fig.3). Is this because of filtering out failed activations and possible rescalings / reflections of the space?
Yes! This is precisely what is happening. Thank you for reading the paper so carefully – this is an extremely subtle point. Indeed, the plots in the middle row of Figure 4 in principle should be the same, because they do not depend on the model. They are different because the activation functions filtered out due to invalid eigenvalues (Figure 1) are in fact different across architectures. Furthermore, UMAP is a stochastic algorithm, so even though there is substantial overlap in the activation functions it is embedding, the final results have small variations between them.
In fact, if you look closely, you can actually see the “rescalings / reflections” of the space that you hypothesized. In the middle row, Act-Bench-CNN and Act-Bench-ResNet are nearly perfect mirror images of each other. You can see this in the arrangement of the overall points, but also with the embedding locations of the labeled activation functions ELU, -ELU, tanh, -tanh, abs, and -abs. The Act-Bench-ViT plot appears different and has a few small clusters of purple points in the edges of the embedding space. These are activation functions that were not filtered out for Act-Bench-ViT but were filtered out in the other tasks. Indeed, if you remove these points, the Act-Bench-ViT embedding space becomes almost identical to the Act-Bench-CNN one (and is a mirror image of the Act-Bench-ResNet one).
We will clarify these points in the revision. Again, thank you for reading the paper so carefully. This is an extraordinarily good insight, and we appreciate that our hard work is being given such a careful review.
---
> Considering that the metric on the union of the representations is the sum of the metrics on the individual representations, then shouldn't the activations be close to each other only if they have both similar shapes and similar FIM eigenvalues?
What you are describing would correspond to an intersection of the representations, but we took a union of the representations. So, activation functions are embedded close to each other if they have similar shapes, similar FIM eigenvalues, or both. We tried the intersection approach but found the union of the representations to be more effective. We will clarify this point in the main text. (See https://umap-learn.readthedocs.io/en/latest/composing_models.html for more details.)
---
Rebuttal Comment 1.1:
Comment: I appreciate the Authors' response and clarifications. Incorporating these insights into the paper will definitely enhance its readability.
Given the current state of the paper, I would keep my rating of 7. However, I believe demonstrating that the proposed method, including the choice of regression algorithm and embedding dimension 1) works on a model other than those used for the benchmarks, and especially that 2) the method can discover activation functions that outperform other baseline activations, even if by adding the baseline activation to the list of initial activations, would further demonstrate the strength of the method and improve the quality of the paper.
Regarding the generalizability concern raised by reviewer UeiT, I respectfully hold a different perspective. Activation functions are part of the network architecture which can be tailored by human experts for a particular task, just like any hyperparameter which is optimized on a validation set, and therefore this shouldn't be considered as overfitting. | Summary: This paper addresses the optimization of activation functions in neural networks for improved performance in machine learning tasks. The authors create benchmark datasets and propose a surrogate-based optimization method based on a characterization of the benchmark space. They apply this method to discover better activation functions in CIFAR-100 and ImageNet tasks, showcasing its practical effectiveness.
Strengths: 1. The authors create benchmark datasets (Act-Bench-CNN, Act-Bench-ResNet, and Act-Bench-ViT) by training various architectures with numerous activation functions.
2. The paper presents a novel surrogate-based optimization method that characterizes activation functions analytically. By utilizing the Fisher information matrix's spectrum and activation function output distribution, a low-dimensional representation is created.
3. The proposed method, AQuaSurF, efficiently discovers improved activation functions in CIFAR-100 and ImageNet tasks, surpassing previous approaches in terms of evaluation efficiency.
Weaknesses: 1. The motivation and definition of using "Activation Function Outputs" as a feature in Section 3 is not clearly explained.
2. In Table 1, some widely used human-designed activation functions, such as ELU, ReLU, and Swish, consistently achieve top performance on various tasks with different networks. However, the top activation functions discovered by the proposed method vary across tasks and networks. This suggests a limited generalizability of the searched activation functions. In other words, when faced with a new task or utilizing a new network, the activation function needs to be searched again. Furthermore, this also implies that the searching method may overfit the specific task and network, rather than finding activation functions that are generally effective and meaningful.
3. Related to the previous point, the design of the search space appears overly complicated, which also raises concerns about overfitting. As observed, the top activation functions discovered through the search process often involve complex combinations of existing human-designed activation functions. This complexity reduces their interpretability. Human-designed activation functions, on the other hand, are typically well-reasoned and supported by theory or hypotheses, allowing them to generalize effectively across tasks and networks. However, the searched activation functions are difficult to explain in terms of why they exhibit certain characteristics, and they lack generalizability.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Weaknesses 2 and 3 raise concerns regarding the significance and necessity of the proposed problem and solution. If the authors are unable to address the issues of generalizability, I would be inclined to view their "improvement" as overfitting to specific tasks and networks.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer UeiT**
---
> The motivation and definition of using "Activation Function Outputs" as feature in Section 3 is not clearly explained.
The intuition behind using activation function outputs as a feature is that we expect activation functions with similar shapes to have similar performance. From one perspective, Equation 3 quantifies the difference between two activation functions’ output distributions at initialization. But from another point of view, Equation 3 is computing the pointwise distance between two activation function shapes, giving extra weight to the middle regions near x=0 where the activation functions are more likely to be utilized.
In the revision, we will explain this motivation for activation function outputs, and will clarify how Equation 3 implements this idea. Thanks for pointing it out.
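A hedged numpy sketch of this idea follows. It is an illustration of the intuition, not the paper's exact Equation 3: inputs are drawn from a standard normal, which naturally concentrates weight on the regions near x = 0.

```python
import numpy as np

def shape_distance(f, g, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of the distance between two activation shapes,
    with inputs drawn from N(0, 1) so regions near x = 0 get more weight."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    return np.sqrt(np.mean((f(x) - g(x)) ** 2))

relu = lambda x: np.maximum(x, 0.0)
elu = lambda x: np.where(x > 0, x, np.exp(np.minimum(x, 0.0)) - 1.0)
swish = lambda x: x / (1.0 + np.exp(-x))

# ReLU and Swish agree closely near zero, so their distance is small
# compared to ReLU vs. ELU, which differ on the whole negative axis.
print(shape_distance(relu, swish) < shape_distance(relu, elu))  # True
```

Functions with visually similar shapes thus end up with small pairwise distances, which is exactly what makes the feature predictive of similar performance.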
---
> The top activation functions discovered by the proposed method vary across tasks and networks.
General activation functions like ELU, ReLU, and Swish are useful for achieving good performance in many tasks. However, in some tasks it is worth spending extra effort in order to achieve the absolute best performance. Customization can provide such an improvement. AQuaSurF is a way to discover customized activation functions that improve performance over the general-purpose baseline solutions in such tasks.
---
> When faced with a new task or utilizing a new network, the activation function needs to be searched again.
Yes, and this process allows taking advantage of customization. With previous techniques it was infeasible to perform such a new search for every task, but with AQuaSurF it is possible. We hope that future work will build on the contributions in this paper, including the benchmark datasets and the code, and improve the efficiency even further.
Note also that the best activation functions discovered often successfully transfer to new tasks and improve performance. This is especially useful for challenging tasks such as ImageNet (Table 2).
---
> This also implies that the searching method may overfit the specific task and network.
Customization means finding an activation function that works as well as possible in the given context, i.e. architecture and task. The result may not work as well in another context---and that is precisely where the power of customization lies. While it is certainly possible to discover solutions that are general and apply to many contexts, they are essentially leaving money on the table. AQuaSurF provides a method for doing such customization separately for each context, thus taking advantage of any possible performance improvement.
To avoid overfitting, we use standard techniques: the networks are trained on the training set, the activation functions are evaluated on a held-out validation set, and final performance is measured on the test set.
---
> The top activation functions discovered through the search process often involve complex combinations of existing human-designed activation functions.
This is actually an advantage of using an automated search process: It is possible to use AQuaSurF to build on any human ideas, i.e. refine and combine them, as well as augment them with entirely new designs. Such solutions can be much more complex than the original human designs; it is thus possible to discover powerful activation functions that humans are not likely to discover on their own.
---
> The searched activation functions are difficult to explain in terms of why they exhibit certain characteristics.
This was true of previous work like PANGAEA and Swish, but this paper actually makes key contributions in understanding what properties make an activation function effective. The two features the surrogate model uses are informative: Activation function outputs describe how the function modifies the forward-propagated signals before training begins, and FIM eigenvalues describe the curvature of the loss surface at initialization. The paper thus suggests that we should not limit ourselves to only using activation functions that have a simple written form---properties such as function outputs and FIM eigenvalues matter more. Based on these observations, in the future it may be possible to develop a general theory of what makes activation functions effective.
To support this effort in practice, the benchmark datasets Act-Bench-CNN, Act-Bench-ResNet, and Act-Bench-ViT, as well as the AQuaSurF software, will be a powerful resource. They already made it possible for us to identify function outputs and FIM eigenvalues as useful predictors of performance; we expect that in the future they will be useful for the community to further theoretical understanding as well as practical development of activation functions.
---
Rebuttal 2:
Title: Update after rebuttal
Comment: It is unfortunate that the authors' rebuttal did not address my concerns.
1. Firstly, the authors' response did not effectively address the concern regarding overfitting. If the functions found during the search on a particular task or model cannot be generalized to other tasks or models, then it constitutes a form of overfitting. This so-called "customization" lacks practical significance and does not offer new insights for academic research.
2. Secondly, taking into account the opinions of Reviewer N57C and Reviewer x7yQ, I am more inclined to agree with N57C. The improvement brought about by this costly "customization" is extremely marginal.
I will keep the rating as 3. Reject.
---
Rebuttal Comment 2.1:
Comment: **Response to Reviewer UeiT**
Thank you for taking the time to respond. We strongly disagree with your assessment and have responded to each of your points below.
First, stating that the functions “cannot be generalized to other tasks or models” is a complete misrepresentation of the paper. Table 2 provides a direct contradiction to this statement: It shows that all nine of the activation functions discovered successfully generalized to a new task: ResNet-50 on ImageNet.
Second, stating that customization is “overfitting” and “lacks practical significance and does not offer new insights” is patently false. Developing custom activation functions for better performance on specific tasks is something that human researchers regularly do, and this paper provides a way to automate this design process. Here is a concrete example: when modeling higher-order derivatives of a signal, periodic activation functions perform exceptionally well, while traditional activation functions like ReLU fail (https://arxiv.org/abs/2006.09661). Designing an activation function with the task in mind does not constitute overfitting! Similarly, one would not argue that CNNs have overfit to vision tasks or that RNNs have overfit to language modeling. Rather, these are models designed to exploit task-specific structure in the data. Our contribution is an automated method for designing activation functions that can also exploit task-specific structure to achieve better performance.
Third, we strongly disagree that the performance improvement is “marginal.” Our approach provided a full percentage point increase in accuracy over ReLU on four different tasks (Tables 1 and 2). This performance improvement is on par with other work in the literature, and it is substantial given that so much effort has already gone into optimizing models for CIFAR-100 and ImageNet.
Again, we appreciate your time in reviewing our paper, but many of the points you made contradict the facts in the paper. Thus, we hope that you will reconsider your point of view. | Summary: This paper introduces a set of benchmark datasets for activation function search, and an efficient search method based on the analysis of the benchmarks.
Strengths: 1. The proposed benchmark datasets are beneficial for further research.
2. The method that searches activation functions through the function outputs of a limited number of samples seems effective and can significantly outperform the random search baseline.
Weaknesses: 1. The paper is hard to follow. The main text refers to many details in the Appendix, but it is still complex and hard to grasp the method. I suggest the authors refine the structure and present the technical details of the proposed method more clearly.
2. Analysis is limited to show the efficiency of the method. The paper includes "Efficient" in the title, but I can only find the evaluation of efficiency in Appendix, and it should be compared with previous search methods to show how efficient it is. Besides, this method still needs to train multiple activation functions independently, which is also computationally expensive.
3. The improvements are marginal. The authors should compare their method with existing approaches on both the benchmark searches and new-task searches. Besides, in Table 1, compared with the popular searched activation Swish, the improvements are marginal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer N57C**
---
> I suggest the authors refine the structure and make the technical details of the proposed method more clearly.
Thanks for the suggestion. Many of these details are currently in the appendix. We will use the extra page in the camera-ready version to include more of them and to refine the structure.
---
> Analysis is limited to show the efficiency of the method…it should be compared with previous search methods to show how efficient it is.
To clarify, the efficiency comes from the number of evaluations needed to find a good function. Previous approaches like PANGAEA and the algorithm that discovered Swish evaluated thousands of activation functions before discovering the best ones. In contrast, AQuaSurF is orders of magnitude more efficient.
Figures 6 and 7 demonstrate this efficiency. In particular, Figure 7 shows how AQuaSurF outperforms all baseline functions on ResNet-56 in just the second function evaluation. We will revise the surrounding text to make these points clear.
Because the training setups and hyperparameters are not the same, the results from previous search methods are not directly comparable. However, a comparison can still be made by looking at the relative improvements gained by using a discovered activation function instead of ReLU (calculated as (new_acc - relu_acc) / relu_acc). For example, with ResNet-56 on CIFAR-100 AQuaSurF results in a relative improvement of 1.65%, and in the same scenario PANGAEA gives a relative improvement of 1.64%. Thus, the two methods discover similarly effective activation functions, but AQuaSurF does so much more efficiently: requiring 100 function evaluations instead of 1,000.
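The relative-improvement computation above is simple enough to state as code (the accuracy values here are hypothetical, chosen only to illustrate the formula, and are not the paper's actual numbers):

```python
def relative_improvement(new_acc, relu_acc):
    """Relative accuracy gain over the ReLU baseline: (new - relu) / relu."""
    return (new_acc - relu_acc) / relu_acc

# Hypothetical accuracies for illustration:
print(round(relative_improvement(71.17, 70.0), 4))  # 0.0167 -> ~1.67%
```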
---
> Besides, this method still needs to train multiple activation functions independently, which is also computationally expensive.
Yes, but in some domains, it is well worth it: spending additional compute to improve performance even a small amount may translate to significant money saved or lives improved. The important contribution here is that while previous methods required access to distributed computing environments, AQuaSurF can be run on a single commodity cloud instance. Previously, only well-resourced labs were able to take advantage of activation function optimization; now it is possible for everyday practitioners, with many more applications benefiting from it.
Furthermore, this paper released three activation function benchmark datasets: Act-Bench-CNN, Act-Bench-ResNet, and Act-Bench-ViT. These resources make it possible to run search algorithms without a GPU at all. We expect these benchmarks to be a valuable resource for the community, enabling future work to improve efficiency even further.
Finally, recall that the activation functions discovered can be transferred to new tasks, even challenging ones such as ImageNet (Table 2). Even though the best performance comes from customizing activation functions to specific tasks, these activation functions can still improve performance in other domains.
---
> Comparing with the popular searched activation Swish, the improvements are marginal.
Swish can be seen as a state-of-the-art activation function, resulting from a significant effort to optimize activation functions. Thus, even a small improvement over Swish is significant. In some domains like medical diagnosis or stock trading, such small improvements can make a meaningful difference.
Moreover, Swish was developed in the context of tasks and architectures popular at the time, and it may not work as well in other contexts (as was already demonstrated with ResNet-v1-56 in Table 2 of the PANGAEA paper https://arxiv.org/pdf/2006.03179.pdf). It is thus not the only activation function we will ever need; instead, it is important to be able to reliably and automatically discover better activation functions for any task and architecture that may come up in the future. AQuaSurF’s sample efficiency will make it possible to improve performance in such new contexts as they arise.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response to my proposed questions. After reading them, some of my concerns are resolved. However, I do not see more competing results or explanations during the rebuttal phase. I am worried about the quite marginal improvement over the human-designed baselines (on which I agree with Reviewer UeiT) and whether it is up to the standard of NeurIPS. Besides, an apples-to-apples comparison in the empirical setting is important for the NAS community, which is also a weakness of this manuscript. In this regard, I tend to keep my original rating.
Rebuttal: **Additional Response to Reviewer x7yQ**
---
> How does AQuaSurF compare with PANGAEA in terms of performance? given the partial similarity of the search spaces, is it possible to make a direct comparison between the two methods (e.g. by limiting the space to non-parametric functions)?
In principle this experiment can be run, but there are a number of challenges, and we do not think the results would be informative enough to justify the compute cost.
PANGAEA is expensive to run. We could limit it to the same number of function evaluations as AQuaSurF, but then it is unlikely that it would discover anything useful (Figure 4, PANGAEA paper https://arxiv.org/pdf/2006.03179.pdf). We could also limit PANGAEA to non-parametric functions, but in this case as well it is unlikely that PANGAEA would discover good functions (Tables 3 and 4, PANGAEA paper). As noted in the response to Reviewer N57C, AQuaSurF gave a relative improvement of 1.65% over ReLU with ResNet-56 on CIFAR-100, while PANGAEA gave a relative improvement of 1.64%. Thus, it appears that the two algorithms are similarly capable, but AQuaSurF is orders of magnitude more efficient, achieving a comparable result in 100 function evaluations instead of 1,000.
Importantly, instead of being in competition with PANGAEA, AQuaSurF can be viewed as an enhancement of it. AQuaSurF used a similar search space as PANGAEA because it was shown to be powerful, and the main contribution was in efficiency gains, i.e., making activation function optimization so efficient that it can be run with commodity hardware as needed for new architectures and tasks. It is likely that the surrogate model in AQuaSurF could be synergistic with other search algorithms and search spaces, and improve their efficiency as well. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions | Accept (poster) | Summary: The paper introduces a novel self-supervised pretext task for Vision Transformers (ViTs), called DropPos. It aims to enhance the spatial reasoning or location awareness of ViTs, based on the observation that ViTs are often insensitive to the order of input tokens. DropPos works by dropping a large random subset of positional embeddings, then using the model to predict the actual position of each patch based solely on its visual appearance.
The paper identifies three major difficulties: (a) discrepancies between pre-training and fine-tuning (b) trivial solutions that fail to learn highly semantic representations by solving this simple task (c) patches with similar visual content. To prevent trivial solutions and increase task difficulty, this paper keeps only a subset of patches visible during the task. Given the potential similarity in visual appearances between different patches, the authors propose position smoothing and attentive reconstruction strategies. This relaxation allows for non-exact position reconstruction when exact positions are not critical. Quantitative evaluations demonstrate the effectiveness of DropPos, outperforming supervised pre-training and yielding competitive results against state-of-the-art self-supervised alternatives on various benchmarks.
Strengths: 1. The main claim is attractive. It would be quite interesting (and a bit counter-intuitive) to see that vision transformers can learn a very good representation from such a simple patch-prediction task, which is very coarse-grained. Particularly, the results of this paper look competitive with the mainstream pre-training tasks.
2. This work provides extensive ablation studies to verify their design and to reveal some insights.
3. It proposes Position smoothing and Attentive reconstruction to solve the problems like patches may share similar visual appearance.
Weaknesses:
1. The paper requires additional experiments and deeper analysis to substantiate some assertions.
For example, this paper starts from "Vision Transformers (ViTs) are quite insensitive to the order of input tokens, the need for an appropriate self-supervised pretext task that enhances the location awareness of ViTs is becoming evident.".
Although this claim fits my intuition, it is odd that after citing some papers [13, 39, 60], the paper does not discuss this claim any further in the following sections. What would happen if the order of input tokens were shuffled? Or even further, should this really be viewed as a drawback? Insensitivity to the order may also be a good property. The authors touch on this in the related work, but this claim needs more attention since it is the main drive of the paper.
Also, it's unclear if DropPos helps this issue. More specifically, if a model is pre-trained by DropPos and then finetuned in ImageNet, would it be more sensitive to the order of input tokens? Looking into this would help us understand if the improved performance comes from the model being more "sensitive to the order of input tokens".
2. A recent work: Jigsaw-ViT: Learning jigsaw puzzles in vision transformer.
Another work Jigsaw-ViT proposes to include solving Jigsaw in the training of ViT, which is close to the task of this work. It is beneficial to include this for comparison.
[1] Chen, Yingyi, Xi Shen, Yahui Liu, Qinghua Tao, and Johan AK Suykens. "Jigsaw-ViT: Learning jigsaw puzzles in vision transformer." Pattern Recognition Letters 166 (2023): 53-60.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am not really sure so I put this in Questions instead of Weaknesses. After checking the main paper and supplementary, it seems the authors did not mention if the pre-training is conducted with DropPos only, or together with other pre-training tasks like in MAE. Could the authors answer whether DropPos is the only training objective?
If the answer is yes, the reviewer would hope the authors can discuss a bit about why such a very coarse-grained task can be more powerful than the counterparts. Does it mean the dense visual cues are not important in pre-training?
Overall I am feeling this paper is a good trial, although it may need more in-depth analysis and appropriate discussion about relevant works.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer S2Rp for their valuable time and constructive feedback. Point-to-point responses are provided below.
**Q1: Additional experiments and deeper analysis are required to verify the motivation.**
**A1:** We observed that the improved position sensitivity results in better feature representation and benefits downstream tasks. To verify this argument, we first propose a metric to evaluate the model's position sensitivity. Specifically, we freeze the backbone and train an extra linear position prediction head using the vanilla cross-entropy loss. Top-1 accuracies of position predictions before and after fine-tuning are reported, and 75% of position embeddings are randomly masked during training. Higher values mean the model is better at modeling the position relationship. The top-1 accuracy on the ImageNet validation set after fine-tuning is also reported.
Please refer to Table 2 in the "global" response for detailed results.
As shown in the table, the backbone performs better in position prediction *after* fine-tuning, indicating that image classification indeed needs strong abilities in modeling spatial relationships. It means that better position sensitivity corresponds to better performance on downstream tasks. This evidence suggests that our motivation, i.e., enhancing the location awareness of ViTs, is reasonable, and the topic is worth studying. By designing a position prediction pretext task, the backbone pre-trained by DropPos has better position modeling abilities, performing better on a variety of downstream tasks.
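As a concrete sketch of this probing protocol, the following trains a linear position-prediction head with cross-entropy on frozen features. Synthetic features stand in for actual frozen ViT patch embeddings, so the setup is illustrative rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pos, dim, n_samples = 16, 32, 2000  # 16 patch positions, toy feature dim

# Stand-in for frozen backbone features: each position has a fixed signature
# plus noise (a real probe would use actual ViT patch embeddings).
signatures = rng.standard_normal((n_pos, dim))
labels = rng.integers(0, n_pos, n_samples)
feats = signatures[labels] + 0.5 * rng.standard_normal((n_samples, dim))

# Linear position-prediction head trained with cross-entropy
# (softmax regression via full-batch gradient descent).
W = np.zeros((dim, n_pos))
for _ in range(200):
    logits = feats @ W
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    probs[np.arange(n_samples), labels] -= 1.0        # d(CE)/d(logits)
    W -= 0.1 * feats.T @ probs / n_samples

acc = (np.argmax(feats @ W, axis=1) == labels).mean()
print(acc > 1.0 / n_pos)  # far above the 1/16 chance level: True
```

The reported metric would then compare this probe accuracy for backbones before and after fine-tuning.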
**Q2: Lack of comparison with [a].**
**A2:** We apologize for the missed comparison. Jigsaw-ViT [a] explores solving jigsaw puzzles as an auxiliary objective in ViT for supervised image classification. The authors have demonstrated that the proposed extra objective brings consistent and significant improvements in supervised tasks. However, DropPos aims to design a brand new self-supervised pretext task that enhances the location awareness of ViTs, which is totally different. Therefore, it is relatively hard to compare the performances of these two works under the same benchmark.
**Q3: About the objective.**
**A3:** Sorry for the ambiguity. DropPos uses only the cross-entropy loss mentioned in the manuscript, and the MSE loss used in MAE is not adopted. We will clarify this in our revised version.
**Q4: The reason why such a very coarse-grained task can be more powerful than its counterparts.**
**A4:** This is an interesting question. It may be because images are natural signals with heavy spatial redundancy [27]. Towards this issue, some interesting works explore the best target representations for masked image modeling, e.g., [36] and [52]. They empirically found that raw RGB pixels may not be the best choice. Using coarse-grained targets such as HoG [52] features even performs better. On the other hand, instance discrimination or contrastive learning is also a very coarse-grained task. It ignores the possibility of different samples belonging to the same category.
Therefore, what matters in pre-training seems to be highly semantic clues, rather than dense cues. An appropriate pretext task is still worth exploring.
**References**
[a] Yingyi Chen et al. Jigsaw-ViT: Learning jigsaw puzzles in vision transformer. Pattern Recognition Letters 166 (2023): 53-60.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. After reading the comments and the rebuttal, I tend to keep the score. | Summary: This paper introduces a novel approach to self-supervised representation learning for vision transformers, focusing on enhancing their positional awareness. The authors proposed a new pretext task called DropPos, which involves reconstructing the positions of dropped tokens in partial observations. By leveraging DropPos, which is a spiritual adaptation of MAE, the authors consistently achieve superior results compared to the state-of-the-art (SOTA) techniques in standard downstream tasks, including image classification, object detection, and semantic segmentation.
Strengths: The paper effectively addresses the problem of incorporating inductive bias into vision transformers, and it takes an interesting approach by employing self-supervised learning (SSL) techniques. Enhancing the positional awareness of vision transformers is a significant aspect, as it can greatly improve their performance in downstream tasks that rely on location awareness and positional information. The proposed pretext task, DropPos, is simple yet effective, which aligns with the requirements of SSL. Additionally, such pretext tasks are efficient as they involve masking a substantial portion of the input data. The paper highlights that using tasks like DropPos eliminates the need for careful selection of target representation and mask strategy, as typically performed in mask image modeling. Furthermore, the paper is well-written and easy to follow.
Weaknesses: One potential weakness of the paper lies in its experimental evaluation. While the proposed pretext task is commended for its simplicity and effectiveness, it would have been valuable to compare the performance of a DropPos pretrained Vision Transformer (ViT) with a hierarchical transformer such as Swin, pretrained using mask image modeling [A, B]. This comparison would have provided insights into whether a carefully designed ViT architecture already addresses the need for positional awareness, rendering the pretext task redundant.
Furthermore, the paper lacks an in-depth analysis of what the ViT is actually learning and how it achieves positional awareness. Quantifying the extent of positional-aware representation learning is crucial. Analysis such as intertoken distance within a layer, sparsity of attention weights, and linear probe results are missing, which would have shed light on the underlying reasons for the success of the proposed pretext task.
A significant concern arises in the evaluation section where the authors reproduce most of the state-of-the-art (SOTA) results reported in other papers. However, the reproduced numbers are generally lower than the original reported results, without any mention of the differences. It would be important for the authors to investigate the causes of these discrepancies, considering factors such as differences in the experimental setup (e.g., number of GPUs, changes in batch size and learning schedules) or potential missing engineering tricks. Particularly concerning is the lack of significant improvement (wrt MAE) in object detection and segmentation results, which are crucial for spatial modeling.
Lastly, the authors missed referencing relevant literature on contrastive learning approaches designed for spatial modeling, such as [C]. In fact, [C] outperforms DropPos when trained for 200 epochs on the ADE20k dataset.
[A] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai and Han Hu. SimMIM: A Simple Framework for Masked Image Modeling
[B] Xiang Li, Wenhai Wang, Lingfeng Yang, Jian Yang. Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality.
[C] Patch-level Representation Learning for Self-supervised Vision Transformers, CVPR 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why do we need to compute the Affinity as indicated in Equation (9)? Why can't we use the self-attention matrix (softmax(KQ'))?
2. In Table 3, it would also be interesting to check the results of 2->1.
3. What happens when ViT is pretrained with DropPos for longer epochs like 1600 epochs?
4. In Fig. 3, how do we know which predictions are correct?
For rebuttal, please also refer to the points mentioned in the weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper acknowledges a limitation in its conclusion, which the reviewer agrees with. Although the experiments conducted with ViT-B are deemed sufficient to demonstrate the potential of the proposed method, it is crucial to provide a comprehensive explanation and understanding of the model, regardless of its size. This ensures a robust analysis and comprehension of the proposed approach beyond the specific ViT-B architecture.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 6DGy for the valuable time and constructive feedback. Point-to-point responses
are provided below.
**Q1: DropPos with Swin.**
**A1:** We provide experiments when DropPos is equipped with Swin. We follow the implementation of
UM-MAE [a] and pre-train a Swin-Tiny from scratch. Please refer to Table 1 in the "global" response
for detailed results. From the table, we can tell that *even a carefully designed ViT architecture has
not addressed the need for positional awareness yet.*
**Q2: An in-depth analysis and the linear probing performance.**
**A2:** We answer this question in the following three perspectives.
(1) Following your suggestion, we illustrate the sparsity of attention weights and the inter-token
distance within a layer in Figure 1 and Figure 2 of the "global" response, respectively.
- Figure 1 demonstrates that compared with MAE, *features of DropPos tend to have more sparse attention maps, especially at shallow layers.*
- In Figure 2, we can conclude that compared with MAE, the shallow features of DropPos (depth 2 to 8) have lower distances, indicating smaller receptive fields. It means that *local patch relationships are more informative to help discriminate positions and enhance location awareness.*
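For reference, the mean attention distance plotted in Figure 2 of the "global" response is conventionally computed as an attention-weighted average of inter-patch distances. Below is a small numpy sketch with two illustrative attention maps (stand-ins, not the actual measured maps):

```python
import numpy as np

grid = 14                     # 14x14 patch grid (ViT-B on 224x224 inputs)
N = grid * grid
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1).reshape(N, 2)
# Pairwise Euclidean distances between patch centres, in patch units.
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

def mean_attention_distance(attn):
    """attn: (N, N) row-stochastic attention map of one head."""
    return float((attn * dist).sum(axis=1).mean())

local = np.eye(N)                        # purely local attention
uniform = np.full((N, N), 1.0 / N)       # attention spread over all patches
print(mean_attention_distance(local))    # → 0.0 (smallest receptive field)
print(mean_attention_distance(uniform))  # grid-average distance (larger)
```

A lower value indicates a smaller effective receptive field, which is the quantity compared between MAE and DropPos in Figure 2.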
(2) To provide an in-depth analysis, we propose a metric to evaluate the model’s position sensitivity
and explore the relationship between position sensitivity and performance on downstream tasks.
Specifically, we freeze the backbone and train an extra linear position prediction head using the
vanilla cross-entropy loss. Top-1 accuracies of position predictions are reported, and 75% of position
embeddings are randomly masked during training. Higher values mean the model is better at modeling
the position relationship. Please refer to Table 2 in the "global" response for detailed results. As
shown in the table, the backbone performs better in position prediction after fine-tuning, indicating
that *image classification indeed needs strong abilities in modeling spatial relationships*. It means that
*better position sensitivity corresponds to better performances on downstream tasks.* By designing a
position prediction pretext task, the backbone pre-trained by DropPos has better position modeling
abilities, performing better on a variety of downstream tasks.
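The frozen-backbone probe described in (2) can be mimicked with a toy numpy simulation. The synthetic "frozen features" and all hyperparameters below are illustrative stand-ins (a real probe would use the pre-trained ViT's tokens), not the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 196, 64                 # 14x14 positions, toy feature dimension
# Stand-in for frozen backbone features: one fixed vector per position
# plus noise, so that features carry position-dependent information.
X = rng.normal(size=(N, D)) + 0.5 * rng.normal(size=(N, D))
y = np.arange(N)               # target: each token's true position index

W = np.zeros((D, N))           # the extra linear position-prediction head
onehot = np.eye(N)[y]
for _ in range(300):           # vanilla softmax cross-entropy descent
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    W -= 1.0 * (X.T @ (p - onehot) / N)

top1 = float((np.argmax(X @ W, axis=1) == y).mean())
print(top1)    # far above the 1/196 chance level
```

The reported metric is exactly this top-1 position-prediction accuracy, except that the features come from the frozen pre-trained backbone and 75% of position embeddings are masked during probe training.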
(3) The linear probing accuracy of DropPos is 43.45% (ViT-B).
**Q3: About the reproduced numbers, the insignificant improvements over MAE on detection
and segmentation.**
**A3:** The difference is due to the training iteration. For detection and segmentation tasks, we first
download the pre-trained backbone and then perform end-to-end fine-tuning using the configuration
of ViTDet [33] and mmsegmentation [14] for COCO and ADE20k experiments, respectively. *For
efficient training, we perform 12 epochs of fine-tuning instead of 100 epochs on COCO, and 80k
iterations instead of 160k iterations on ADE20k*, following [48] and [18]. Therefore, although the
reproduced numbers are lower than their original numbers, we have conducted a fair comparison. We
will clarify this and try to conduct experiments with longer schedules in our revised version.
We would like to point out that the improvement over MAE on the ADE20k semantic segmentation
benchmark is significant (+0.8 mIoU). As for the COCO experiments, the improvements seem somewhat incremental, perhaps because ViTDet [33] was originally tuned for MAE and we did not tune any parameters. However, consistent improvements over MAE are observed on COCO
benchmarks, verifying the effectiveness of DropPos.
**Q4: Lack of comparison with [b].**
**A4:** We will add a brief discussion on contrastive learning approaches designed for spatial modeling
in our revised version. As for the performances, [b] used the multi-crop augmentation technique.
Therefore, the effective pre-training epoch should be $200 \cdot \frac{2 \cdot 224^2 + 8 \cdot 96^2}{224^2} \approx 700$ instead of 200.
Simply comparing DropPos pre-trained with only 200 epochs seems to be unfair.
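For concreteness, the effective-epoch estimate above can be checked with a few lines of arithmetic (numbers taken directly from the formula in A4):

```python
# Per-image pixel cost under multi-crop augmentation, relative to a
# single 224x224 view: two global 224x224 crops plus eight 96x96 crops.
global_cost = 2 * 224 ** 2
local_cost = 8 * 96 ** 2
scale = (global_cost + local_cost) / 224 ** 2
effective_epochs = 200 * scale
print(round(scale, 2), round(effective_epochs))  # → 3.47 694
```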
**Q5: Affinity instead of self-attention.**
**A5:** The self-attention map is a reasonable alternative. However, we empirically found that using
affinity brings slightly better performances than using the self-attention map.
**Q6: More experiments in Table 3.**
**A6:** We appreciate the suggestion! We have conducted the suggested experiment. When $\sigma = 1 \to 0$, DropPos achieves 82.68% top-1 accuracy on ImageNet and 39.97 mIoU on ADE20k. This evidence indicates that reconstructing precise positions at the end of training is beneficial.
**Q7: DropPos with 1600 epochs.**
**A7:** We were unable to pre-train a backbone with such a long schedule during the short rebuttal period due
to limited computational resources. However, we hypothesize that DropPos is expected to perform
better with a longer schedule, as the top-1 accuracy of position predictions is around 96% and still
has room to improve. We will try to add this experiment in the future.
**Q8: About Figure 3.**
**A8:** We apologize for the ambiguity. Figure 3 shows the qualitative results of position reconstruction
under different mask ratios γ. Black patches are masked during inference. The positions of those
white patches are wrongly predicted, while the remaining patches are predicted correctly. From the
figure, we can conclude that DropPos manages to reconstruct precise positions even under extremely
difficult situations (e.g., γ = 0.75).
**References**
[a] Xiang Li et al. Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision
Transformers with Locality. arXiv:2205.10063, 2022.
[b] Sukmin Yun et al. Patch-level Representation Learning for Self-supervised Vision Transformers.
In CVPR 2022.
---
Rebuttal Comment 1.1:
Title: Reproduced numbers
Comment: The reviewer acknowledges and appreciates the authors' efforts in addressing all raised concerns. It's acceptable to not include the model results from 1600 epochs. The incorporation of the Swin results, coupled with the in-depth analysis, will undoubtedly enhance the quality of the paper.
The reviewer accepts the model configuration for the COCO and ADE experiments as stated: "For efficient training, we perform 12 epochs of fine-tuning instead of 100 epochs on COCO, and 80k iterations instead of 160k iterations on ADE20k, following [48] and [18]."
However, there seems to be a discrepancy between the reported figures in this paper and those presented in [48] and [18]. This discrepancy has not been explained. For instance, the BootMAE results appear to have been under-reported. Additionally, it should be noted that the MAE results (trained for 1600 epochs) in the BootMAE paper outperform those of Droppos.
---
Reply to Comment 1.1.1:
Comment: We appreciate your response! We hope our clarifications below can address your concerns better.
First of all, we would like to clarify that HPM [48] and BootMAE [18] performed 160k iterations of fine-tuning for ADE20k semantic segmentation results, while we only adopt 80k iterations of fine-tuning.
Therefore, the discrepancy between the reproduced number and the reported number of BootMAE mainly lies in the results of COCO experiments.
Specifically, the reported numbers are 48.5 box AP and 43.4 mask AP, while the reproduced numbers are 47.3 box AP and 42.3 mask AP.
The reason should be the different code base. BootMAE was originally built using mmdetection [A], while the implementation of our DropPos is based on detectron2 [B]. The main difference between these two implementations is the input resolution. The input images of BootMAE are resized so that the shorter side is 800 pixels, while the longer side does not exceed 1333 pixels (as mentioned in the last paragraph of page 22), while DropPos takes 1024x1024 images as inputs (see [here](https://github.com/facebookresearch/detectron2/blob/main/projects/ViTDet/configs/common/coco_loader_lsj.py) for details).
We recognize that there seems to be a significant difference. However, as the detection code of BootMAE has not been made publicly available, and the default configuration of ViTDet in mmdetection (it takes 1024x1024 as the input resolution, see [here](https://github.com/open-mmlab/mmdetection/blob/main/projects/ViTDet/configs/lsj-100e_coco-instance.py) for details) is different from that of BootMAE, it is relatively hard for us to have a detailed check.
**References**
[A] Kai Chen et al. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.
[B] Yuxin Wu et al. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
Thanks again for your time and consideration. Please let us know if you have any questions. We are always looking forward to an open dialog. | Summary: This paper presents a simple yet effective approach for generative self-supervised representation learning on images, namely DropPos. The proposed approach drops a large random subset of positional embeddings for visible tokens and classifies the actual position for these tokens via visual appearance. Experimentally, DropPos outperforms state-of-the-art self-supervised approaches on a wide range of downstream benchmarks including image classification, detection and segmentation.
Strengths: - The paper is well written and organized.
- The idea in this paper is simple yet effective, which brings something new in generative self-supervised representation learning.
- The authors provide clear implementation details (e.g., Pseudo-Code), which makes it easier to be reproduced.
- Detailed ablation study is conducted to verify the effectiveness of the proposed approach.
- Achieving SOTA performance on various downstream tasks.
Weaknesses: - As illustrated in (4), the cross-entropy loss is applied for dropped-position supervision. Besides this loss, is any additional loss used (e.g., the MSE reconstruction loss used in MAE)?
- In MAE, more layers are used in the decoder (i.e., 8). Here the decoder only consists of 2 layers. Is it because the dropped position classification task is easier than the patch reconstruction task?
- What’s the overall training time for DropPos? Is it comparable or more efficient than existing self-supervised approaches?
- For completeness, please add more dropout-guided work for discussion, e.g., DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks (CVPR23), which similarly employs dropout mechanism, but for spatial-attention dropout in videos.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: No. See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer HkQR for the valuable time and constructive feedback. Point-to-point responses
are provided below.
**Q1: About the objective.**
**A1:** DropPos uses only the cross-entropy loss mentioned in the manuscript, and the MSE loss used in
MAE is not adopted. We will clarify this in our revised version.
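As a concrete illustration of this objective, here is a toy numpy sketch of the cross-entropy construction: drop a fraction of position embeddings and classify each affected token's true position among all candidate slots. The shapes, mask ratio, and random "logits" are illustrative stand-ins, not the actual DropPos implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 196                  # visible patch tokens (14x14 grid)
gamma_pos = 0.75         # fraction of position embeddings dropped

# Tokens whose position embedding is dropped; their true positions
# become the classification targets.
drop = rng.permutation(N) < int(gamma_pos * N)
targets = np.arange(N)[drop]

# Stand-in for the model's position logits over all N candidate slots.
logits = rng.normal(size=(drop.sum(), N))

# Vanilla cross-entropy, computed on the dropped tokens only.
logits -= logits.max(axis=1, keepdims=True)
log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_p[np.arange(len(targets)), targets].mean()
print(drop.sum(), round(loss, 2))  # 147 dropped; chance-level loss here
```

With untrained (random) logits the loss sits near the chance level for a 196-way classification; training drives it down as the model learns to localize patches from appearance alone.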
**Q2: About the decoder.**
**A2:** In fact, when using decoders with different depths, the fine-tuning accuracy of MAE is almost
the same (please refer to Table 1a in MAE [27]). We adopt a shallower decoder simply because it is
more efficient. Moreover, we provide an ablation over the decoder depth in the following table. We
take ViT-B as the backbone and all models are pre-trained with 200 epochs. From the table, we can
tell that DropPos appears to be robust against different decoder depths.
| # blocks | ImageNet | ADE20k |
|---|---|---|
| 2 | 82.96 | 40.68 |
| 8 | 82.88 | 40.05 |
**Q3: About the overall training time.**
**A3:** The training procedure of each iteration is as efficient as MAE, and the overall training time is
half that of MAE since DropPos is pre-trained with only 800 epochs.
**Q4: Lack of discussion with dropout-guided works.**
**A4:** We appreciate the suggestion! We provide a brief discussion of representative dropout-guided studies in the
following. [a] adaptively performs spatial-attention dropout in the frame reconstruction to facilitate
temporal correspondence learning in videos, leading to a stronger temporal matching learner in visual
object tracking and segmentation. [b] adopts feature-level dropout to the common weak-to-strong
pipeline in semi-supervised semantic segmentation, bringing a broader perturbation space and thus
resulting in better performances.
To the best of our knowledge, DropPos is the first work that proposes a brand new self-supervised
pretext task using dropout on position embeddings.
**References**
[a] Qiangqiang Wu et al. DropMAE: Masked Autoencoders with Spatial-Attention Dropout for
Tracking Tasks. In CVPR 2023.
[b] Lihe Yang et al. Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic
Segmentation. In CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns in a sufficient way. I would like to keep my former decision. | Summary: This paper introduces DropPos, a self-supervised pretext task designed to enhance the location awareness of Vision Transformers (ViTs). By dropping positional embeddings and reconstructing the positions of visible patches with some auxiliary strategies, DropPos improves spatial reasoning abilities in ViTs. Experimental results demonstrate the efficacy of DropPos, outperforming supervised pre-training and achieving competitive performance against state-of-the-art self-supervised methods. The paper provides good insights on enhancing location awareness in ViTs for future works.
Strengths: 1. The motivation is clear. The paper addresses the motivation of enhancing positional awareness and spatial reasoning abilities in vision transformers for pre-training.
2. The method is simple. DropPos focuses on reconstructing dropped positions with some heuristics to avoid trivial solutions and ambiguities. Compared with contrastive learning, it does not need complicated augmentations.
3. The method is effective. Compared to contrastive learning or masked image modeling, the proposed DropPos exhibits faster pretraining and slightly improved performance on several benchmarks.
Weaknesses: 1. The initialization of the positional encoding for the DropPos is not discussed, and there is no analysis of the impact of the different PE initialization strategies for the proposed DropPos method.
2. The experiments are insufficient to validate the motivation. Although preliminary evidence is provided by experiments on downstream tasks such as detection and segmentation, which indicates that the DropPos enhances the location-awareness of ViT, the paper lacks more in-depth and intuitive experimental analysis and discussion to verify the strengthening of position sensitivity of ViT by the DropPos method.
3. The description of the DropPos method is not specific and clear enough, including the method implementation and the flowchart of the pseudo-code in Section 3.2.
4. The scaling properties of DropPos on ViT are not explored in depth. For example, it is questionable whether the method remains as effective as MAE on ViT-Huge. Experiments on more advanced vision transformers, such as Swin Transformer, are also encouraged.
5. Compared with the MAE method, there is a lack of studies on transfer learning using iNaturalists and Places, and the robustness evaluation on ImageNet.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Regarding the DropPos method: Could you provide more specific details and clarity on the implementation of the DropPos method and the flowchart of the pseudo-code in Section 3.2? This would help readers better understand the proposed approach.
2. Strengthening of the validation of motivation. Could you enhance the experimental analysis and discussion to provide more depth and intuition regarding the impact of DropPos on the position sensitivity of ViT? Can you consider conducting additional analyses or visualizations to better illustrate and explain the observed effects?
3. Positional Encoding Initialization: It would be valuable to discuss the initialization strategy for positional encoding in the DropPos method. How was it initialized, and were different initialization strategies explored? Analyzing the impact of different positional encoding initialization strategies on the performance of DropPos would enhance the comprehensiveness of the study.
4. Scaling Properties and Generalization: Could you further investigate and discuss the scaling properties of the DropPos method on larger architectures, such as ViT-Huge? Is the method equally effective and robust on larger-scale models? Additionally, have you considered applying the DropPos method to other advanced vision transformers like Swin-Transformer and evaluating its performance?
5. Transfer Learning and Robustness Evaluation: It is noted that there is a lack of studies on transfer learning using iNaturalists and Places datasets, as well as the robustness evaluation on ImageNet. I would like to see if the DropPos method remains effective on these benchmarks.
6. Figure 3 on the qualitative results of position reconstruction is somewhat confusing and requires a clearer explanation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer e3Vr for the valuable time and constructive feedback. Point-to-point responses
are provided below.
**Q1: The initialization of the positional encoding.**
**A1:** DropPos uses fixed 2D sin-cos position embeddings by default. We ablate the initialization of
position embeddings in the following table, and it demonstrates that fixed sin-cos position embeddings
achieve the best performance. We will add this in our revised version.
| Initialization | Learnable | ImageNet | ADE20k |
|---|---|---|---|
| sin-cos | × | 82.96 | 40.68 |
| sin-cos | √ | 82.81 | 39.37 |
| random | √ | 82.48 | 38.72 |
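For reference, the fixed 2D sin-cos initialization in the first row is commonly built with the MAE-style recipe: half the channels encode the row coordinate and half the column coordinate. The sketch below is illustrative (the function names are ours, not the authors' code):

```python
import numpy as np

def sincos_1d(embed_dim, pos):
    """Standard 1D sin-cos embedding for a vector of positions."""
    omega = 1.0 / 10000 ** (np.arange(embed_dim // 2) / (embed_dim / 2.0))
    out = np.outer(pos, omega)                     # (M, embed_dim // 2)
    return np.concatenate([np.sin(out), np.cos(out)], axis=1)

def sincos_2d(embed_dim, grid_size):
    """Fixed 2D sin-cos embedding over a square patch grid."""
    ys, xs = np.meshgrid(np.arange(grid_size), np.arange(grid_size),
                         indexing="ij")
    emb_h = sincos_1d(embed_dim // 2, ys.reshape(-1))  # row coordinate
    emb_w = sincos_1d(embed_dim // 2, xs.reshape(-1))  # column coordinate
    return np.concatenate([emb_h, emb_w], axis=1)  # (grid^2, embed_dim)

pe = sincos_2d(768, 14)     # ViT-B: 14x14 patches, 768-dim tokens
print(pe.shape)             # → (196, 768)
```

These embeddings are deterministic (non-learnable), which matches the best-performing row of the table above.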
**Q2: The strengthening of improved position sensitivity.**
**A2:** To systematically answer this question, we propose a metric to evaluate the model’s position
sensitivity and explore the relationship between position sensitivity and performance on downstream
tasks. Specifically, we freeze the backbone and train an extra linear position prediction head. Vanilla
cross-entropy loss is used for training. Top-1 accuracies of position predictions before and after
fine-tuning are reported, and 75% of position embeddings are randomly masked during training.
Higher values mean the model is better at modeling the position relationship. The top-1 accuracy on
the ImageNet validation set after fine-tuning is also reported.
Please refer to Table 2 in the "global" response for detailed results.
As shown in the table, the backbone performs better in position prediction after fine-tuning, indicating
that *image classification indeed needs strong abilities in modeling spatial relationships*. It means that
*better position sensitivity corresponds to better performances on downstream tasks*. By designing a
position prediction pretext task, the backbone pre-trained by DropPos has better position modeling
abilities, performing better on a variety of downstream tasks.
**Q3: The implementation of DropPos is not specific and clear enough.**
**A3:** We apologize for the ambiguity. To clarify the flowchart of DropPos, we provide the pseudo-code
for computing the objective of DropPos. Please refer to the PDF in the "global" response for details.
We will add this and polish the method section to make it clearer and more readable.
**Q4: The scaling properties and generalization of DropPos.**
**A4:** The scaling property is worth studying when evaluating the effectiveness of a self-supervised
algorithm. Comparing performances between ViT-B and ViT-L, we can conclude that as the number
of parameters in the model increases, the performance of DropPos improves. However, ViT-Huge training cannot be finished within the short rebuttal period; we will try to provide the results in a future revision.
To verify the scaling property and the generalization of DropPos, we provide experiments when
DropPos is equipped with the Swin Transformer. We follow the implementation of UM-MAE [a] and
pre-train a Swin-Tiny from scratch using DropPos. All models are pre-trained with 200 epochs and
fine-tuned with 100 epochs, following the configuration of UM-MAE [a].
Please refer to Table 1 in the "global" response for detailed results.
From the table, we can conclude that DropPos still works on Swin Transformers, and thus enhancing
the location awareness of vision transformers is worth studying.
**Q5: Lack of transfer learning results and the robustness evaluation.**
**A5:** We recognize that these results are important to verify the generalization of a self-supervised
algorithm. However, MAE evaluated the performance on iNaturalists and Places365 by fine-tuning
on target datasets and the fine-tuning code has not been made publicly available. We will try to add
these experiments in future revisions.
We conduct robustness evaluation on ImageNet-Adversarial [b] and ImageNet-Rendition [c] using the
same models fine-tuned on the original ImageNet and only run inference on the different validation
sets in the following table, which is exactly the same as MAE. As shown in the table, with only 800
epochs of pre-training, DropPos achieves comparable or even better performances, demonstrating its
robustness.
| Method | Backbone | Epoch | ImageNet-A [b] | ImageNet-R [c] | ImageNet |
|---|---|---|---|---|---|
| MAE | ViT-B | 1600 | **35.9** | 48.3 | 83.6 |
| DropPos | ViT-B | 800 | 35.5 | **48.8** | **84.2** |
| MAE | ViT-L | 1600 | **57.1** | **59.9** | **85.9** |
| DropPos | ViT-L | 800 | 56.7 | 59.8 | 85.8 |
**Q6: Figure 3 requires a clearer explanation.**
**A6:** We apologize for the ambiguity. Figure 3 shows the qualitative results of position reconstruction
under different mask ratios γ. Black patches are masked during inference. The positions of those
white patches are wrongly predicted, while the remaining patches are predicted correctly. From the
figure, we can conclude that DropPos manages to reconstruct precise positions even under extremely
difficult situations (e.g., γ = 0.75).
**References**
[a] Xiang Li et al. Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision
Transformers with Locality. arXiv:2205.10063, 2022.
[b] Dan Hendrycks et al. Natural adversarial examples. In CVPR, 2021.
[c] Dan Hendrycks et al. The many faces of robustness: A critical analysis of out-of-distribution
generalization. In ICCV, 2021. | Rebuttal 1:
Rebuttal: To all reviewers:
Thank you so much for your careful review and suggestive comments. Following your suggestions, we present some extra figures and tables in the PDF. We also provide the pseudo-code for computing the objective of DropPos. Specifically,
- **@Reviewer e3Vr**, to clarify the flowchart of DropPos, we provide the pseudo-code for computing the objective in Algorithm 1.
- **@Reviewer 6DGy**, to evaluate the sparsity of attention maps of DropPos, we compare attention maps generated by MAE and DropPos at the first and the last Transformer blocks in Figure 1.
- **@Reviewer 6DGy**, to measure the receptive field of DropPos, we compare the mean attention distance of MAE and DropPos in Figure 2.
- **@Reviewer e3Vr** and **Reviewer 6DGy**, we conduct experiments in Table 1 to verify that DropPos also works with Swin Transformers.
- **@Reviewer e3Vr**, **Reviewer 6DGy**, and **S2Rp**, to explore the strengthening of improved location awareness, we provide an in-depth and intuitive experimental analysis in Table 2, where we freeze the pre-trained backbone and train an extra linear patch classification head.
Sincerely,
Authors.
Pdf: /pdf/669b7375219248e56d630765bb9026743ef5e026.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper presents a method for self-supervised representation learning. Given a ViT architecture, the authors propose to predict the absolute positions of randomly masked positional embeddings. Although the general direction is not new, the authors pose it in a simple and interesting way that achieves good performance on the downstream tasks.
Strengths: 1) The proposed idea is simple yet effective, and is executed well.
2) The method is presented properly and is easy to follow.
3) The authors evaluate their method properly on several downstream tasks, achieving acceptable performance.
Weaknesses: The direction of obtaining a supervision signal from the patch positions in the ViT architecture has been explored before. The authors mention the majority of them in their related work section but do not discuss what the advantage of their proposed method is. The performance supports the effectiveness of their method, but they could discuss more explicitly their advantages and ablate on that. Moreover, a comparison with [1] would be informative.
[1] Sameni et al., Representation Learning by Detecting Incorrect Location Embeddings, In AAAI 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, they have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Pgbn for the valuable time and constructive feedback. Point-to-point responses
are provided below.
**Q1: The advantage of the proposed DropPos should be discussed explicitly and lack of
comparison with [a].**
**A1:** We appreciate the suggestion! We would like to discuss the advantages of our DropPos explicitly. As mentioned in
the manuscript, there are three main difficulties in designing position-related self-supervised pretext
tasks for ViTs: (1) eliminating the discrepancies between pre-training and fine-tuning,
(2) avoiding trivial solutions to ensure highly semantic representations, and (3) reducing the impact of
confusing targets caused by similar visual appearances. No other method manages all three,
as summarized in the following table.
| Method | Eliminate Discrepancies | Avoid Trivial Solutions | Remove Confusing targets |
|-------------------|-------------------------|-------------------------|--------------------------|
| Zhai et al. [60] | × | × | × |
| Caron et al. [4] | √ | × | √ |
| Sameni et al. [a] | √ | × | √ |
| DropPos | √ | √ | √ |
Specifically, Zhai et al. [60] simply discard all positional embeddings during pre-training, and
thus discrepancies arise. Therefore, the fine-tuning performances of [60] largely lag behind the
state-of-the-art.
Caron et al. [4] propose to predict the relative location of a local crop to the corresponding global
crop, making it time-consuming and hard to learn highly semantic representations as this pretext task
is somewhat too simple for powerful ViTs. ViTs may simply solve this task by comparing the texture
of two given crops.
Sameni et al. [a] come up with an auxiliary position-related objective and combine it with the popular
contrastive learning paradigm. Therefore, the generalization abilities of learned representations
are highly related to data augmentation techniques. Also, the position-related task itself proposed
by [a] (without contrastive learning) may become a trivial solution, as identifying several mismatched
positions is relatively easy for powerful ViTs.
DropPos solves the mentioned three difficulties by dropping a subset of position embeddings, dropping
a large subset of patch tokens, and position smoothing and attentive reconstruction, respectively.
Despite these advantages, the pre-training procedure of DropPos is ≈ 3× more efficient than [60, 4,
a] thanks to the patch masking stage.
In fact, all these three advantages have been ablated in the manuscript. Specifically, we ablated
the effectiveness of alleviating discrepancies in Table 2 (γ_pos = 0.75 *v.s.* γ_pos = 1), where an
improvement of +0.3% fine-tuning top-1 accuracy is observed. The second advantage is ablated
in Table 1 (γ = 0.75 *v.s.* γ = 0), where an improvement of +1.02% fine-tuning top-1 accuracy is
observed. The effectiveness of eliminating confusing reconstruction targets is ablated in Table 3
(σ = 1 → 0 *v.s.* σ = 0) and Table 4 (τ = 0.1 *v.s.* τ = ∞), where improvements of +0.11% and
+0.12% fine-tuning top-1 accuracies are observed.
We will emphasize these results in the revision to further help address the concerns.
**References**
[a] Sepehr Sameni et al. Representation Learning by Detecting Incorrect Location Embeddings. In
AAAI 2022. | null | null | null | null | null | null |
Variance-Reduced Gradient Estimation via Noise-Reuse in Online Evolution Strategies | Accept (poster) | Summary: The study considers numerically estimating gradients for online (reinforcement) learning.
The parameters are perturbed, and the gradient is estimated from the performance of the perturbed models.
In an online setting, the proposed method decouples the number of steps between gradient estimates (which are then used by a first order gradient based optimiser) and the number of steps between changing the perturbation.
Two aspects are relevant in the paper: The variance of the gradient estimate and the ability to parallelise the methods.
Strengths: I could not find anything that I would call wrong in the manuscript.
Weaknesses: Almost all research is to a certain degree incremental. The main problem of the research presented in this paper is that the increment is rather small; it lacks originality and novel, "surprising" insights.
The problem setting is not new.
The proposed algorithm, as properly stated in the manuscript, can be viewed as a slight generalisation of "Persistent evolution strategies" [13].
The canonical way to apply a Monte Carlo method such as ES to non-episodic RL tasks is to split the state-action-reward sequence into "pseudo-episodes". This is also done/suggested in other non-ES RL algorithms, e.g., in Proximal Policy Optimization. This is also sometimes referred to as truncation.
The problem of how many (pseudo-)episodes are needed to evaluate a perturbation to get a reliable signal to be exploited by ES for RL has also been studied, for example see
Heidrich-Meisner and Igel. Hoeffding and Bernstein Races for Selecting Policies in Evolutionary Direct Policy Search. ICML 2009
Two aspects are relevant in the paper: The variance of the gradient estimate and the ability to parallelise the methods. The first one leads to a non-surprising result, which was simultaneously discovered in another work, properly cited by the authors: [19].
In my evaluation, the parallelization, while being useful in practice, is rather a technicality.
The discussion of the (parallel) runtime is partly not convincing.
I understand that
1. FullES takes O(T) time, it sees the states from s^+_0 to s^+_T (and s^-_0 to s^-_T)
2. calling Algorithm 2 takes O(W) time, it sees the states s^+_tau to s^+_tau+W (and s^-_tau to s^-_tau+W)
3. T/W independent calls of Algorithm 2 can be fully parallelised, so on T/W nodes taking the average over T/W independent calls starting from the same state s+ / s- also takes O(W) time
(Please correct me if I am wrong).
So I think "Under perfect parallelization, the entire NRES gradient estimation would require O(W) time to complete. In contrast, the single FullES gradient estimate has to traverse the UCG from start to finish, thus requiring O(T) time. Hence, NRES is T/W times more parallelizable than FullES with the same compute budget." is a bit misleading, because a single FullES gradient estimate sees all steps from 0 to T, while Algorithm 2 looks at several "parallel" trajectories starting from the same state and running for W steps. I think this is qualitatively different. In the RL setting, you cannot in general replace information collected over a time period with information collected from an ensemble whose members start from the same state. Think of an RL task where the rewards are always zero in the first 0.99*T steps.
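For concreteness, my reading of the time accounting in points 1–3 can be sketched as follows (my own illustration with arbitrary values of T and W, not code from the paper):

```python
# My own sketch of the time accounting in points 1-3 (illustrative values).
T, W = 1000, 10          # full unroll length vs. truncation window length

full_es_time = T         # point 1: one FullES estimate unrolls steps 0..T
one_call_time = W        # point 2: one call of Algorithm 2 unrolls W steps
parallel_time = one_call_time   # point 3: T/W calls on T/W nodes, still O(W)

speedup = full_es_time / parallel_time   # the claimed T/W factor
```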
Regarding the passage "Instead, we aim to compare the pure performance of different ES methods assuming perfect parallel implementations. To do this, we measure a method’s performance as a function of the number of sequential environment steps it used. Sequential environment steps are steps that have to happen one after another (e.g., the environment steps within the same truncation window). However, steps that are parallelizable don’t count additionally in the sequential steps.":
Does this mean that global random search, which evaluates 10^50 independent random policies, has a cost of only one?
This would then be the ultimate baseline method, outperforming all proposed algorithms.
The manuscript lacks clarity. For example, some statements about "chaotic loss surfaces" are confusing. Being chaotic is a well-defined mathematical property.
It starts to make sense in the experiments: the Lorenz system is chaotic, which in the first place refers to perturbations of the system state.
The experiments always start from the same initial state; what is optimised are the coefficients of the Lorenz system. What is the exact definition of a "chaotic loss surface"? Where is the proof that the loss surface is really chaotic when changing r and a (citation missing)?
A comparison with an evolution strategy that uses the perturbations also for the update, such as a (1+1) CMA-ES with restricted covariance matrix, would have been interesting.
Hyperparameter tuning seems to be crucial. The description in the appendix is appreciated.
The Lorenz training suffers from instabilities. There are countermeasures, like gradient clipping.
One could also move to an optimiser that decouples gradient magnitude and update step length, for example Adam. Adam is used later, but for the
Lorenz experiments the manuscript sticks to SGD. How are the results with Adam (and corresponding learning rate tuning)?
This may be important because the experiments lead to strong statements ("AD methods perform worse than ES methods", Appendix), and one has to rule out that the problem lies in the optimiser rather than in AD as such.
Minor:
* Instead of talking about a trick and adding a reference, simply state that you use $\nabla_\theta \mathbb{E}_{x \sim p(x|\theta)}[f(x)] = \mathbb{E}_{x \sim p}[f(x) \nabla_\theta \ln p(x|\theta)]$; that is clearer (referring to this as a trick makes me cringe).
* The embedding in the existing literature is weak. One may not need to go back to Rechenberg and Schwefel when talking about evolution strategies (although it does not harm), but citing [12] as the only source for evolution strategies for RL is, in my evaluation, not OK.
* The first epsilon in line 73 should be in bold font.
* The hyperparameters are relevant; the main paper should point more clearly to the appendix where the hyperparameter selection is discussed.
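To make the first minor point concrete, a quick Monte Carlo check of the log-derivative identity (the Gaussian choice p = N(theta, 1) and the test function f(x) = x² are my own illustration, not from the manuscript):

```python
import random

# Monte Carlo check of the identity for p = N(theta, 1), where the score is
# d/dtheta ln p(x|theta) = x - theta. The test function f(x) = x**2 is my own
# choice; for it, E[f(x)] = theta**2 + 1, so the true gradient is 2*theta.
random.seed(0)
theta, n = 0.5, 200_000

total = 0.0
for _ in range(n):
    g = random.gauss(0.0, 1.0)     # x = theta + g, so the score equals g
    total += (theta + g) ** 2 * g  # f(x) * d/dtheta ln p(x|theta)
estimate = total / n               # should be close to 2 * theta = 1.0
```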
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
* What is the exact definition of a "chaotic loss surface"? Where is the proof that the Lorenz "loss surface" is really chaotic when changing r and a?
* Under the used "sequential environment steps" measure, would a global random search algorithm that evaluates 10^50 independent random policies have a cost of only one? Would this be the ultimate baseline method, outperforming all proposed algorithms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: OK
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Originality of contribution**.
It is correct that NRES, while being simpler to implement than PES, is not a large algorithmic deviation from PES. However, the insight that reusing noise in truncation windows can both theoretically and empirically achieve significant variance reduction for unbiased online ES methods is a major and novel contribution of our paper. We want to emphasize that it is the combination of 1) noise reuse and 2) truncation that makes our contribution unique and novel. In contrast, the references the reviewer provides only consider either of these two ideas individually: on the one hand, it is true that the non-ES literature has explored the idea of truncation windows before, but such prior works have not considered the noise reuse aspect over multiple such windows. On the other hand, although the paper by Heidrich-Meisner and Igel evaluates a perturbation ($\theta$ plus noise) multiple times (similar to the idea of noise reuse), every such evaluation is an independent sample over the entire episode but not over a random truncation window. We believe the existence of these prior works on each concept alone does not diminish the contribution of our work. In addition, we note that we have provided a detailed discussion in Section B (line 507-541) of the various aspects in which our work differs from and improves over the contemporaneous work of [19] in analyzing online ES algorithms.
**Understanding the parallelization argument**.
Unfortunately, the reviewer’s summary of how NRES is parallelized is incorrect. The reviewer assumes that all the parallel $T/W$ NRES workers would start unrolling from the same pair of antithetic states $s_+$ and $s_{-}$, thus covering the same truncation window. However, as shown in Figure 2(a), for an NRES update, each NRES worker maintains its own pair of antithetic states (see Algorithm 2) and independently works at a different truncation window (we call these independent workers _step-unlocked_ (line 90-92)). By independence, different workers’ truncation windows should largely be non-overlapping and collectively cover almost all of the possible truncation windows in an episode. Thus the reviewer’s scenario, in which no NRES worker covers the last $1\\%$ of an episode for multiple updates, would not be possible under our assumption and implementation.
**Measuring the number of sequential steps of global random search**.
In our RL Mujoco experiments, we have constrained the total number of environment steps per gradient update to be equal for all the ES methods (line 289). For example, on the Swimmer task, both FullES and NRES use a total of 6000 steps per update. This would have prevented the reviewer’s degenerate case of using $10^{50}$ parallel workers in global random search at the cost of only a single episode worth of sequential steps $T=1000$. Besides, using $10^{50}$ episodes would also compare unfavorably in the total number of environment steps needed to solve a task, where we also show NRES is the best among all the ES methods.
**What does _chaotic_ mean?** Mathematically, a function being chaotic means the function’s output is extremely sensitive to small changes in the function’s input. However, chaotic can be used to describe different functions in different contexts: when we say the Lorenz system is chaotic, the chaotic function is the dynamical system which maps from an initial state to a future state evolved under its dynamics. In contrast, _when we say the Lorenz system has a chaotic loss surface, the chaotic function is the loss function, which maps from the learnable parameter $\theta$ to its loss value_. Prior works on TES and PES have both used the term “chaotic” to describe the loss surfaces arising from unrolled computations. Thus we believe we use the term “chaotic” clearly and in a manner consistent with prior work. We have also shown that the Lorenz task’s loss surface is indeed chaotic through the visualizations in Figure 4(a) (left panel) and Figure 7. For example, in Figure 7(b), we see that the loss has extreme oscillations on the line segment connecting the initialization and ground truth parameters. If the reviewer believes there are other portions of our paper that lack clarity, please let us know.
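As a toy illustration of this sensitivity (a self-contained sketch using a simple forward-Euler integrator; this is not the exact setup from our experiments), perturbing a single Lorenz parameter by $10^{-3}$ makes the two trajectories, and hence any loss computed from their states, diverge far beyond the size of the perturbation:

```python
def lorenz_traj(rho, steps=2000, dt=0.01, sigma=10.0, beta=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz system from a fixed initial state."""
    x, y, z = 1.0, 1.0, 1.0
    traj = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj.append((x, y, z))
    return traj

base = lorenz_traj(28.0)
perturbed = lorenz_traj(28.001)   # change one system parameter by 1e-3
# The pointwise gap between the trajectories grows far beyond 1e-3, so any
# loss computed from these states is extremely sensitive to the parameter.
max_gap = max(
    sum((p - q) ** 2 for p, q in zip(s, t)) ** 0.5
    for s, t in zip(base, perturbed)
)
```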
**Comparison against (1 + 1) CMA-ES with restricted covariance matrix**.
Our paper focuses on online evolution strategies that use truncation windows for gradient estimation. We have compared against the offline method FullES because the online ES methods stem from it and prior works haven’t theoretically and experimentally analyzed the benefit of online ES over FullES. However, beyond FullES, we believe other offline ES methods (such as (1+1) CMA-ES) do not fit into the comparison scope of our current paper. We leave holistic comparisons of different classes of ES methods as future work.
**Adam or gradient clipping cannot help automatic differentiation methods learn on the Lorenz task**.
When the loss surface has extreme sensitivity and many suboptimal local minima, not only can the gradient’s magnitude become extremely large, but the gradient’s direction can also become non-informative. The alternative methods the reviewer proposed (gradient clipping or Adam optimizer) can potentially handle the cases of extreme gradient magnitude, but are not suitable to handle such non-informative gradient direction problems. Experimentally, we have tried using Adam with learning rates spanning over 9 different orders of magnitude ($10^{-1}$ to $10^{-9}$) and also gradient clipping over 3 orders of magnitude of thresholds. Under all of these optimizer hyperparameters, we are unable to make the de facto automatic differentiation methods backprop through time (BPTT) to reach a loss of lower than 300 (initialization loss is at 312) on the Lorenz task (where NRES and FullES can reach a loss lower than 50), thus proving the issue really lies in the AD gradient estimation methods but not in the chosen optimizers.
---
Rebuttal Comment 1.1:
Comment: Coming back to my questions:
Question 1: It is clear and well defined what a chaotic system is; it is clear what the Lorenz system is and for which values of its parameters it is a chaotic system.
The chaotic behaviour refers to the three state variables (often denoted x, y, z) of the system.
The objective/fitness function is a function of (some of) the parameters (typically denoted by sigma, rho, and beta) of the Lorenz system.
How the properties of the Lorenz system carry over to the objective function is not clear to me.
Now, what defines a "chaotic loss surface"?
Question 2: "Under the used 'sequential environment steps' measure, would a global random search algorithm that evaluates 10^50 independent random policies only have a cost of one?" So, the answer to this question is yes?
---
Reply to Comment 1.1.1:
Title: Further Explanation to the Reviewer's Questions
Comment: Thanks for your reply. We believe our rebuttal has answered these questions (see paragraphs on “__What does chaotic mean?__” and “__Measuring the number of sequential steps of global random search__”). However we discuss these questions in more detail below in case our previous response was unclear.
__Answer to Question 1__: The reviewer asks how the chaotic property of the Lorenz system carries over to the objective function. We provide a detailed explanation in the paragraph below. However, before doing so, we note that this question is not the focus of our work: the aspect that is relevant to our experiments is that this dynamical system parameter learning problem indeed has _a loss objective function that exhibits extreme sensitivity (oscillations) with respect to small changes in its input learnable parameters_ (__this is what we refer to as a chaotic loss surface__). This terminology is consistent with its usage in prior works on online evolution strategies methods [1,2], and we have empirically shown the Lorenz loss objective function is chaotic as it has high degrees of fluctuation when small changes occur in the parameter space through Figure 4(a) and Figure 7.
In terms of why the chaotic property of the Lorenz system carries over to the objective function, we summarize the discussion in [3], which has explained this relationship. Since the Lorenz system is a chaotic dynamical system, the state variable at a later time $s_t$ is highly sensitive to changes in the state variable at a prior time $s_i$, $i < t$. This means that the Jacobian matrix between the two states ($\frac{d s_t}{d s_i}$) has some large singular values. Because the total derivative of the loss function $L_{\textrm{avg}}$ with respect to the Lorenz system parameter $\theta$ is related to the Jacobian matrix through the following relationship (equation (7) and (8) in [3]):
$\frac{dL_{\textrm{avg}}}{d\theta} = \frac{1}{T} \sum_{t=1}^T [\frac{\partial L_t}{\partial \theta} + \sum_{i=1}^t \frac{\partial L_t}{\partial s_t} {\color{blue} \frac{d s_t}{d s_i}} \frac{\partial s_i}{\partial \theta} ],$
the magnitude of the loss gradient $\frac{dL_{\textrm{avg}}}{d\theta}$ will also be large due to the existence of the large singular values in the Jacobian matrices $\\{{\color{blue} \frac{d s_t}{d s_i}}\\}$. Having these high magnitude gradients implies that the loss values would have extreme sensitivity to small changes in the system parameters ($\theta$), thus resulting in the chaotic behavior of the loss function.
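As a minimal numeric sanity check of this sum-over-time-steps structure (a toy linear unroll of our own construction, unrelated to the Lorenz dynamics), the total derivative computed by finite differences matches the sum of the per-step partial derivatives:

```python
# Toy unroll s_t = theta * s_{t-1}, s_0 = 1, loss L = s_T = theta**T.
# The total derivative dL/dtheta decomposes into a sum over time steps of the
# partial derivative through the i-th application of theta (each theta**(T-1)).
theta, T = 1.5, 4

def loss(th):
    s = 1.0
    for _ in range(T):
        s = th * s
    return s

sum_of_partials = sum(theta ** (T - 1) for _ in range(T))    # = T * theta**(T-1)
h = 1e-6
finite_diff = (loss(theta + h) - loss(theta - h)) / (2 * h)  # total derivative
```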
[1] Metz, L., Maheswaranathan, N., Nixon, J., Freeman, D., & Sohl-Dickstein, J. Understanding and correcting pathologies in the training of learned optimizers. ICML, 2019.
[2] Vicol, P., Metz, L., & Sohl-Dickstein, J. Unbiased gradient estimation in unrolled computation graphs with persistent evolution strategies. ICML, 2021.
[3] Metz, L., Freeman, C. D., Schoenholz, S. S., & Kachman, T. Gradients are not all you need. arXiv:2111.05803, 2021.
__Answer to Question 2__: The answer to this question is “no”, as the cost would be significantly larger for global random search in our empirical setup. To see this, we note that we have constrained all the methods in our experiments to use the same number of environment steps per gradient update (line 289) for fair comparison. If the reviewer insists on a comparison to global random search (which is not a gradient-based local update method, as we focus on in this work) in our experiment setup, to appropriately account for costs we should similarly consider __the cost of global random search as an iterative update method__: global random search keeps track of the best $\theta_*$ (with the lowest loss value) seen so far, and for each update, it randomly samples a number of parameters $\\{{\theta_i}\\}\_{i=1}^N$, evaluates their individual loss in parallel, and updates $\theta_*$ with the best $\theta_i$ if it has an even lower loss. From this perspective, we need to constrain global random search to also __use the same number of environment steps per update__ just as we have constrained the ES methods. For example, on the Swimmer task, because we make both FullES and NRES use only 6000 environment steps (a total of 6 episodes) per update, global random search will also be allowed to evaluate only $6$ different randomly drawn $\theta_i$ (instead of $10^{50}$) in parallel for an update (which would cost an episode-length ($T=1000$) number of sequential unrolls). Thus to evaluate $10^{50}$ episodes in total, global random search would require $10^{50} / 6 \approx 1.67 \times 10^{49}$ updates and a total of about $1.67 \times 10^{52}$ sequential steps. Global random search would therefore be expected to perform quite poorly against the methods considered in our paper.
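The cost accounting in this paragraph, spelled out numerically (using the Swimmer figures stated above):

```python
# Cost accounting for global random search on Swimmer, as argued above.
T = 1000                    # episode length = sequential unrolls per update
episodes_per_update = 6     # same per-update environment-step budget as FullES/NRES
episodes_total = 10 ** 50

updates_needed = episodes_total / episodes_per_update   # ~1.67e49 updates
sequential_steps = updates_needed * T                   # ~1.67e52 sequential steps
```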
Thank you for engaging with us during the rebuttal period. Please let us know if you have any further questions, and in particular if there are any other points from your initial review that have not yet been addressed. | Summary: This paper generalizes PES based on noise-reuse, generating a more general class of unbiased online ES gradient estimators. The authors analytically characterize the variance of the estimators and identify the lowest-variance estimator named Noise-Reuse Evolution Strategies (NRES). Experiments on learning dynamical systems, meta-training learned optimizers, and reinforcement learning show that NRES results in faster convergence than existing AD and ES methods in terms of wall-clock time and number of unroll steps.
Strengths: 1. The paper proposes a general framework GPES by generalizing the existing PES algorithm based on noise-reuse.
2. The paper gives a theoretical analysis of unbiasedness and variance for GPES, and identifies the lowest-variance gradient estimator under this setting named NRES, which always reuses noise.
3. The paper proves that under some reasonable conditions, NRES has lower variance than FullES.
Weaknesses: 1. The proposed GPES is a simple and direct generalization of the existing PES. The novelty is not very strong.
2. The experiments need to be improved, e.g., adding the empirical comparison on longer sequences and higher dimension problems.
3. Why can GPES be better than PES? An intuitive explanation is needed.
Minor issue: Step-unlocked is used without explanation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Theorem 8, it's said that the required assumption often holds in real-world applications. Can you give more explanation and evidences?
2. In the experiments, the x-axis of figures is measured by wall-clock time. Why not use the number of iterations?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Novelty of the contribution**. We agree with the reviewer that our proposed class of unbiased online ES gradient estimators GPES is a simple, intuitive generalization of PES. (In fact, we view the simplicity of the method as an important practical benefit.) However, _we respectfully disagree that our work lacks novelty_, as the main contributions of our paper aren’t only in introducing this class but more importantly in:
1. providing a theoretical characterization of the total variance of this class of estimators;
2. provably identifying the estimator NRES has the least total variance within this class;
3. empirically verifying the improvement of NRES over PES and other GPES estimators.
**More experiments on longer sequences and higher-dimensional problems**.
We believe our current experiments’ scale is already sufficient to prove NRES’s advantage over other ES baselines, and note that the scale is comparable to the scale considered in the most relevant prior work PES (which also uses $d \le 2000$ and $T$ around $1000$). However, to further demonstrate our algorithm NRES’s ability in handling higher-dimensional and longer sequence problems, we provide more experiments in the common response (Figure 1 in the attached pdf). To summarize the result, we increase the parameter dimension $d$ and sequence length $T$ each by (more than) $10\times$ on the learned optimizer task. For both cases, our proposed method still achieves significant wall-clock savings over all other ES methods.
**Intuitive explanation of why GPES can be better than PES?**
We do not claim that any GPES estimator is better than PES. Instead, we prove through Theorem 5 and Corollary 6 that NRES is better than other GPES estimators (including PES) due to its least amount of total variance. We thus assume the reviewer is asking for the intuition of _why NRES has lower variance than PES_ (please let us know if that’s not the case):
Let’s consider a truncation window of size $W=1$ for simplicity. When PES and NRES unroll over the same time step $t$, both methods aim to estimate the **total derivative** of step $t$’s smoothed loss ($\widehat{L_t}$) with respect to $\theta$: $\frac{d\widehat{L_t}([\theta]\_{\times t})}{d\theta}$ (notice $\theta$ has been repeatedly applied $t$ times to compute this loss). This total derivative is equal to the sum of partial derivatives $\frac{d\widehat{L_t}}{d\theta} = \sum_{i=1}^t \frac{\partial \widehat{L_t}}{\partial \theta_i}$, where each term is with respect to the $i$-th time step’s application of $\theta$. By applying a new Gaussian noise at every time step, PES can produce an unbiased estimate of each partial derivative. In contrast, because NRES uses the same Gaussian noise for all time steps, it cannot individually estimate each partial derivative but only their sum. However, to optimize $\theta$, we only need this sum (i.e. the total derivative) but not the individual terms. _The cost PES pays for having a separate yet unused estimate of each partial derivative is a larger variance than NRES_, since it needs more randomness to obtain this extra information. We will add this intuitive explanation to the paper as a remark.
**Understanding why Theorem 8’s assumption often holds in real world**.
To understand our claim that Theorem 8’s assumption often holds in the real world, we can reorganize our explanation in line 200-207 in three steps:
1. In many real world applications of unrolled computation graphs, if we update the parameter $\theta$ using the loss gradient from a specific truncation window, the losses in other truncation windows will also decrease. This is because there is a correlation between performing well at different time steps. For example, in a learned optimization task, the inner model parameters gradually improve during the training by a learned optimizer. Here, if the inner model over the last truncation window achieves lower validation losses (thus improved generalization), the earlier windows’ inner model snapshots will likely also generalize better and have lower validation losses.
2. Given Step 1’s observation, we can conclude that different truncation window losses’ total derivatives with respect to the same $\theta$ should largely point in the same direction. (Otherwise Step 1 would not be observed).
3. When different truncation windows’ gradients are pointing in similar directions, the assumption (Equation (10)) in Theorem 8 would hold true. To see this intuitively, consider the extreme case where each window’s gradient (the vector $\sum_{t=W(k-1) + 1}^{Wk} g^t$) lies exactly in the same direction (on the same line). In this case, Equation (10) is almost trivially satisfied.
With the above three steps, we see that Theorem 8’s assumption would often hold true in the real world. We have also provided empirical verification of Theorem 8’s conclusion under this assumption in Figure 3(b) in the main paper, which provides further evidence that such assumptions would hold in real applications.
**Why use wall-clock time instead of number of iterations?**
Since our work focuses on parallel ES methods, we choose measures of computation that allow us to compare the parallelizability of these algorithms in terms of the actual/theoretical time they take to run. The number of iterations is not such a measure, because _different algorithms could require significantly different amounts of time to run while still using the same number of update iterations and total computation_. For example, given the same computation budget of $2T$ unrolls per update, FullES can only afford to run 1 worker from start to finish, thus taking $T$ units of time per update. In contrast, NRES can simultaneously run $T/W$ independent workers each over a truncation window of length $W$, in total taking only $W$ units of time because of the parallelization. In this case, NRES can achieve a $T/W\times$ speedup over FullES even if they use the same number of update iterations to converge (see line 191-195).
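The worker and time accounting in this example, in numbers (the budget of $2T$ unrolls counts both members of each antithetic pair):

```python
# Fixed compute budget of 2*T unrolls per update; the factor 2 comes from
# each worker unrolling an antithetic pair of perturbations.
T, W = 1000, 10
budget = 2 * T

fulles_workers = budget // (2 * T)   # 1 worker, unrolling the full episode
fulles_time = T                      # sequential steps per update

nres_workers = budget // (2 * W)     # T/W = 100 step-unlocked workers
nres_time = W                        # each covers one length-W window in parallel

speedup = fulles_time // nres_time   # T/W = 100x theoretical wall-clock gain
```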
---
Rebuttal Comment 1.1:
Title: Follow up hYKW
Comment: Dear Reviewer hYKW,
We would appreciate it if you would be so kind as to acknowledge and respond to the authors' rebuttal. This is crucial to ensure the reviewing process is conducted adequately.
AC
---
Rebuttal Comment 1.2:
Title: Thanks for your detailed response.
Comment: Most of my concerns have been addressed. I will keep my evaluation. | Summary: This work proposes a method for optimizing unrolled computation graphs (e.g. recurrent networks, etc.). When using ES to optimize a computation graph, the graph must be fully rolled out. Recent methods (PES) have examined using a truncated window to optimize, so that optimization can occur without a full unroll. PES aims to unbias the estimator by accumulating truncation noise during an online rollout, however, it produces high variance.
This paper presents a generalization to PES, GPES, that decouples the frequency of noise injection and gradient estimation. A specific instantiation, named NRES, samples the noise only once per episode. This reduces variance as it removes the need for noise accumulation.
Experiments are presented on a two-parameter chaotic Lorenz system, where NRES outperforms previous work. An RNN-based meta-learning setup is also evaluated wherein the transition function defines an update over inner parameters, along with a reinforcement learning setup to solve Mujoco tasks.
Strengths: This paper provides an extremely thorough investigation of their proposed framework. The framework includes a generalization of previous work, along with a theoretically-justified instantiation with desirable properties on unbiased estimation and low variance. This is an original idea that is promising. The quality and clarity of the writing is excellent. The work sets up for additional work in investigating the properties of GPES methods, including variations on the proposed NRES.
Weaknesses: The final experiments on meta-learning and reinforcement learning are lacking detail. A more precise experimental setup would strengthen the argument of the paper. In addition, comparisons to non-ES based methods would give more context on how such methods compare to other solutions in the field. The experiments present wall-clock time as the unified axis, but discussion in the paper mentions total computation budget -- a figure comparing the methods using some consistent measure of computation would strengthen the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: How does NRES compare to FullES in terms of pure computational budget? What are the scaling patterns of additional parallelization?
How do non-ES methods perform on the Mujoco tasks?
Additionally, ablations on hyperparameters such as the noise variance and # of workers would grant insight into the nuance of the method.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are discussed in section 7 -- they largely relate to ES in general.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the quality and clarity of our writing and the originality of our idea.
**Experiment details**.
We have provided detailed descriptions of the experimental setups and how we tuned the hyperparameters for each experiment in Appendix E. We will make this more obvious in the main paper.
**Comparison against non-ES based methods**.
The primary non-ES methods we compare against are automatic differentiation gradient estimation methods. As we have explained in line 58-63, AD methods struggle to be useful in problems with chaotic loss surfaces. For these problems, we have already compared against $4$ AD methods on the Lorenz task and the learned optimizer task in our experiments. Because AD methods all perform worse than ES methods (line 241 and 261), we show their results in Figure 8 and 9 in the Appendix.
**Measure of computation**.
There are naturally two ways to measure the amount of computation an optimization algorithm needs to reach a given performance level on unrolled computation graphs:
* **Measure 1**: the total number of unrolls, which characterizes the total amount of work (energy) used by the algorithm. In reinforcement learning, this measure is also treated as the empirical sample complexity.
* **Measure 2**: the number of sequential unrolls, which is linearly proportional to the theoretical wall clock time a parallel algorithm needs assuming good implementation and computing hardware.
Since we focus on parallel ES algorithms, our primary measure of computation is through Measure 2 (we use Measure 2 for the RL experiments in the paper). Empirically, we have also compared ES algorithms by the actual wall-clock time and calculated how many times NRES improves over other methods. This actual wall-clock improvement is a conservative estimate of NRES’s theoretical wall-clock improvement captured by Measure 2 because we only use a single GPU card (which has a limited amount of parallelization ability). However, the actual wall-clock speed up should approach the theoretical speed up (Measure 2) as we scale up on the amount of compute hardware.
**Comparing NRES and FullES in terms of pure computation budget**. Despite our focus on Measure 2, we also observe an improvement in Measure 1 (total number of unrolls) because of the variance reduction benefit of NRES over FullES (Theorem 8). We have provided the statistics of both measures for the RL experiments in Table 3 in the Appendix. In the Table below, we redisplay these results for RL and additionally provide these measures for the meta-training learned optimizer task in Figure 5(a). Here we see that NRES is not only significantly more parallelizable than FullES (Measure 2), but also reduces the total amount of compute/sample (Measure 1).
|| reach 0.61 loss in learned optimizer||solve Mujoco Swimmer||solve Mujoco Half Cheetah||
|-|-|-|-|-|-|-|
||# of sequential unrolls (Measure 2)|# of total unrolls (Measure 1)|# of sequential unrolls|# of total unrolls|# of sequential unrolls|# of total unrolls|
| FullES | $2.46 \times 10^6$ | $4.92 \times 10^7$ | $2.50 \times 10^4$ | $1.50 \times 10^5$ | $6.28 \times 10^5$ | $7.54 \times 10^6$ |
| NRES | $\mathbf{1.95 \times 10^4}$ | $\mathbf{4.00 \times 10^6}$ | $\mathbf{2.40 \times 10^3}$ | $\mathbf{1.11 \times 10^5}$ | $\mathbf{1.02 \times 10^4}$ | $\mathbf{5.81 \times 10^6}$ |
**Non-ES methods on the Mujoco task**.
As the Mujoco tasks’ transition dynamics do not support automatic differentiation, the most relevant methods are policy gradient methods, which require a stochastic policy and apply likelihood ratio (LR) gradient estimation in the action space. (In contrast, ES applies LR in the parameter ($\theta$) space and can handle deterministic policies.) We compare against the results reported by Rajeswaran et al, who used natural policy gradient to train linear policies on the two Mujoco tasks considered in this paper:
||Number of environment steps to solve||
|-|-|-|
||Swimmer|Half Cheetah|
|(Rajeswaran et al) + stochastic linear policy|$1.45 \times 10^6$|$1.13 \times 10^7$|
|NRES (ours) + deterministic linear policy|$\mathbf{1.11 \times 10^5}$|$\mathbf{5.81 \times 10^6}$|
As we can see, NRES improves over this policy gradient method in solving these Mujoco tasks.
Rajeswaran, A., Lowrey, K., Todorov, E. V., & Kakade, S. M. (2017). Towards generalization and simplicity in continuous control. Advances in Neural Information Processing Systems, 30.
**Ablation on the noise variance $\sigma$**.
As we have described in line 946, all ES methods require tuning the hyperparameter $\sigma$, which controls the variance of the smoothing distribution: on the one hand, setting $\sigma$ too small might not provide sufficient loss smoothing, making the optimization difficult to converge. On the other hand, setting $\sigma$ too large might shift the global optimum of the smoothed surface away from the true minimum. Following the reviewer’s suggestion for an ablation, we provide experiments on the impact of $\sigma$ on both NRES and FullES’s performance on the Mujoco Half Cheetah task in Figure 2 in the common response pdf (where we have tuned the learning rate for each value of $\sigma$ and each method separately). Here we see that although an insufficient amount of loss smoothing ($\sigma = 0.001$) could lead to slow convergence, there exists a range of hyperparameters ($\sigma=0.004$ and $\sigma=0.01$) that allows both ES methods to solve the task. In both cases, NRES improves significantly over FullES.
**Ablation on the number of workers $N$**.
As the NRES workers are independent, the variance of the average of their gradient estimates scales with $1/N$. We provide an experiment on the impact of $N$ on NRES’s performance on the Mujoco Swimmer task in Figure 3(a) in the common response pdf. We see that having more workers can reduce the number of sequential steps needed to converge but also incurs a greater cost per sequential step.
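This $1/N$ scaling can be checked with a toy simulation (a generic antithetic ES estimator applied to a simple quadratic — our illustration, not the NRES implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 5, 0.1
theta = np.ones(d)

def f(x):
    return 0.5 * x @ x          # toy smooth objective; true gradient is theta

def es_grad(eps):
    # antithetic ES estimate: [f(theta + eps) - f(theta - eps)] * eps / (2 * sigma^2)
    return (f(theta + eps) - f(theta - eps)) * eps / (2 * sigma**2)

def averaged_estimate(n_workers):
    # average of n_workers i.i.d. estimates (each worker draws independent noise)
    return np.mean([es_grad(sigma * rng.standard_normal(d)) for _ in range(n_workers)], axis=0)

# variance of one coordinate of the averaged estimate, N = 1 vs N = 100
var1 = np.var([averaged_estimate(1)[0] for _ in range(500)])
var100 = np.var([averaged_estimate(100)[0] for _ in range(500)])
assert var100 < var1 / 10       # variance shrinks roughly like 1/N
```

The trade-off in the ablation is visible here too: larger $N$ gives a lower-variance update, but each update consumes $N$ times more unrolls.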
---
Rebuttal Comment 1.1:
Title: Acknowledgement of Rebuttal
Comment: Thank you for the detailed response and the additional experimental results. It would be great to include these results in a revised version of the paper as well, for future reference. Given that the score already indicates accept, I will maintain this score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer's Acknowledgement
Comment: Thanks for your prompt reply! We will make sure to incorporate these added results in our revision.
If you believe we have answered your questions to the degree that you'd feel comfortable raising your _confidence_ score, we would really appreciate it, but in any case we want to express our gratitude again for your positive feedback and suggestions to improve our work. | Summary: In this paper, the author(s) extended the well-known Persistent Evolution Strategies via noise-reuse and proposed an improved version with reduced variance. The main contribution of this paper is to provide detailed mathematical proof to validate their claim.
Strengths: Considering this significant contribution to evolution strategies and other related ML communities, I personally suggest accepting this high-quality research paper.
In this paper, the author(s) also discussed the main limitations of their method. These discussions of possible limitations (hysteresis and complexity) are highly encouraged in academic research because they better reflect the whole state of the method.
Furthermore, the author(s) raised the question of “whether there are better ways to leverage the sequential structure” beyond the isotropic Gaussian distribution. I agree that this is an interesting open question (worth investigating).
Weaknesses: Some notes are given in order to further improve this paper:
For Section 5.3 Reinforcement Learning, the removal of “additional heuristic tricks” can help us focus on the underlying mechanism, which is very critical for algorithmic understanding, whether from a practical or theoretical perspective. We often prefer general-purpose design principles, though these “additional heuristic tricks” sometimes work well in some cases. Given that only the linear policy was used in experiments, it would be highly desirable to also include the more challenging non-linear policies with higher dimensions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: No comments here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating the significance of our contribution.
**Non-linear RL policy**. We initially experimented with linear policies on the Mujoco tasks because
1. it has been observed by Rajeswaran et al that using linear policies can yield performances comparable to state-of-the-art results on continuous control (Mujoco) tasks;
2. we want to keep our experiments consistent with prior works on ES (Mania et al, Vicol et al), which also experiment with linear policies.
However, following your suggestion, we have conducted an additional experiment where we train the multi-layer perceptron policy used by Schulman et al on the Half Cheetah task, thereby increasing the total number of parameters by $6.5\times$. We tune each ES method’s SGD learning rate individually and report the result in Figure 3(b) in the attached pdf in the common response. Here we constrain all the methods to use the same number of total environment steps. We notice that among the four methods, only NRES successfully solves the task. Among the rest, although FullES comes close to solving the task, it requires $50\times$ more theoretical wall clock time than NRES, while both PES and TES remain far from solving the task under the same budget. This result demonstrates NRES’s ability to learn non-linear, higher-dimensional policies for reinforcement learning and provides additional evidence of NRES’s significant advantage over other ES methods. Thanks for the suggestion to explore this direction.
Mania, H., Guy, A., & Recht, B. (2018). Simple random search provides a competitive approach to reinforcement learning. Advances in Neural Information Processing Systems, 31.
Rajeswaran, A., Lowrey, K., Todorov, E. V., & Kakade, S. M. (2017). Towards generalization and simplicity in continuous control. Advances in Neural Information Processing Systems, 30.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., & Moritz, P. (2015, June). Trust region policy optimization. In International conference on machine learning (pp. 1889-1897). PMLR.
Vicol, P., Metz, L., & Sohl-Dickstein, J. (2021, July). Unbiased gradient estimation in unrolled computation graphs with persistent evolution strategies. In International Conference on Machine Learning (pp. 10553-10563). PMLR. | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for your reviews and comments. We address each reviewer’s questions and feedback in an individual response. For new plots created for the rebuttal, we have included them in the uploaded pdf in this common response. Below is a summary of the new plots in the pdf:
* **Figure 1(a) (Reviewer hYKW)**: We compare ES gradient estimators on an $11 \times$ higher-dimensional ($d=19330$) learned optimizer task than that considered in Figure 5(a). To increase the parameter dimension, we increase the width of the multilayer perceptron used by the learned optimizer. For this higher-dimensional problem, NRES can still provide a $3.8\times$ wall clock time speed up over PES and a $9.7 \times$ wall clock time speed up over FullES.
* **Figure 1(b) (Reviewer hYKW)**: We compare ES gradient estimators on the learned optimizer task with a $10\times$ longer sequence length ($T=10000$) than that considered in Figure 5(a). For this problem, NRES still achieves a $2.1 \times$ wall clock speed up over PES and a $3.5 \times$ speed up over FullES.
* **Figure 2 (Reviewer FJdn)**: We perform an ablation study on the impact of the noise variance $\sigma^2$ on the performance of FullES and our proposed method NRES in solving the Mujoco Half Cheetah task. While setting $\sigma$ too small makes both methods fail to solve the task, there still exists a range of larger $\sigma$ values under which both methods can solve the task successfully. For these cases, NRES always achieves a more than $50\times$ reduction in the number of sequential steps used over FullES.
* **Figure 3(a) (Reviewer FJdn)**: We perform an ablation study on the impact of the number of workers $N$ on the performance of NRES in solving the Mujoco Swimmer task. Here, increasing $N$ can help NRES use fewer sequential steps to solve the task but at a larger per sequential step compute cost.
* **Figure 3(b) (Reviewer 77pb)**: We compare ES gradient estimators on training a non-linear, ($6.5\times$) higher-dimensional policy network on the Mujoco Half Cheetah task under a fixed budget of total number of environment steps. Only our proposed method NRES has solved the task.
Pdf: /pdf/4a854429a7e24a8ccde33980ff47984c232fed1d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies online evaluation strategies for unrolled computation graphs. Especially, the authors 1). propose a general class of unbiased online evolution strategies that generalizes Persistent Evolution Strategies (PES), named Generalized Persistent Evolution Strategies (GPES). The key idea is to share noise across truncation windows instead of sampling every round. 2). characterize the variance and variance reduction properties of their strategies. 3). study a special case of the general class strategies, named Noise Reuse Evolution Strategie (NRES), show the variance advantage of NRES over other estimators. 4). experimentally show the advantages of NRES across a variety of applications, including learning dynamical systems, meta-training learned optimizers, and reinforcement learning.
Strengths: - The idea of reusing noise to reduce variance is interesting and a bit counterintuitive. For first-order methods, reusing noise always leads to a larger variance (if the noise is reused, averaging gradients cannot reduce the variance from the noise). However, it seems that for evolution strategies, such a simple method can significantly reduce variance and improve performance.
- Experimental results are compared to the previous methods and show a remarkable improvement.
- The paper is well written. The proof in the appendix is well organized and seems correct.
Weaknesses: - No comparison between NRES and first-order methods.
- The authors did not explain some of the parameter settings in the experiments. In particular, in Figures 5 and 6, why choose a different $N$ when comparing online ES and FullES?
- For online ES, the algorithm updates $\theta$ every truncation window, so the loss will never be in the form of (7). In this case, it seems NRES is still biased.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Given a time $T$, FullES returns one gradient $g=\left[L([\theta+\epsilon]_{\times T})-L([\theta-\epsilon]_{\times T})\right]\epsilon/(2\sigma^2)$, and NRES (Algorithm 2) returns $T/W$ gradients $g_1,\dots, g_{T/W}$, where $g_k=\left[\sum_{t=Wk+1}^{Wk+W}L_t([\theta+\epsilon]_{\times t})-L_t([\theta-\epsilon]_{\times t})\right]\epsilon/(2\sigma^2 W)$. By definition, $g=(W/T)\sum_{k=1}^{T/W} g_k$. In this case, if we choose the learning rate $\eta_{\mathrm{FullES}} = \eta_{\mathrm{NRES}}\,T/W$, there should be no difference between FullES and NRES. So why is there such a big improvement in the experiments?
One minor typo: Algorithm 6 line 10: not $L_t^s$, should be $L^s_{\mathrm{self}.\tau+1}$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Theoretical work. No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparison between NRES and first order methods**. We have provided discussions of automatic differentiation (first order) methods in Section B Additional Related Work in the Appendix. Besides, we have provided empirical comparisons between NRES and 4 different first order AD methods on the Lorenz task and the learned optimizer task in our paper. As these first order methods all perform worse than ES methods (lines 241 and 261), we have deferred their results to Figures 8 and 9 in the Appendix.
**How $N$ is chosen for the experiments**. As we have described on lines 188-190, because a FullES worker’s gradient estimate uses $2T$ steps while an NRES worker uses $2W$ steps, we by default maintain a ratio of $T/W$ between the number of NRES and FullES workers to keep the per-update number of unrolls constant. This is how we choose $N$ for online ES and FullES on the Lorenz task (Figure 4(b)), the Swimmer task (Figure 6(a)), and the Half Cheetah task (Figure 6(b)). For the learned optimizer task (Figure 5(a)), we empirically find that we can use significantly fewer online ES workers to achieve good performance – we only need $200$ total unrolls ($100$ NRES workers each antithetically unrolling for $1$ step) per meta-gradient update. Because $200$ total unrolls is fewer than the minimum number of unrolls FullES needs per update (at least $2\times T = 2000$ unrolls), we have to relax the default worker-number ratio and allow FullES to use more workers (we choose $N=10$ after tuning it among $\\{1, 3, 10\\}$; lines 840-842).
**Bias in NRES**.
As we have discussed in the paragraph on _hysteresis_ (lines 105-113), it is indeed correct that, as we update $\theta$ every truncation window, the loss in the current window has some historical dependence on past values of $\theta$ (hysteresis), making the NRES gradient biased in practice. However, it is worth noting that all the prior works on online gradient estimation methods (including TES, PES, and first order methods like UORO and DODGE) also have bias from hysteresis. Although we didn’t observe much impact of hysteresis in our experiments, we believe understanding and correcting hysteresis is an interesting direction for future work, as discussed in lines 323-325.
**Understanding the improvement of NRES over FullES**.
Given the same computation budget of $2T$ unrolls, we can use it either to produce one FullES gradient estimate or to produce $T/W$ i.i.d. NRES gradient estimates. The reviewer’s current description assumes the same NRES worker sequentially produces these $T/W$ gradient estimates – instead, these $T/W$ NRES gradient estimates are produced by independent workers and thus can happen simultaneously. In contrast, the single FullES worker has to unroll sequentially for $T$ steps from start to finish. Thus NRES can complete the same amount of total unrolls with $T/W \times$ less time than FullES (see line 191-195). This ability to achieve extra parallelization is the primary reason we see such a big improvement of NRES over FullES in the experiments.
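The budget accounting above can be summarized in a back-of-envelope sketch (illustrative numbers only, not figures from the paper):

```python
# A budget of 2*T unrolls buys either one FullES gradient estimate, whose
# single worker must unroll T steps start to finish, or T/W i.i.d. NRES
# estimates of W steps each, produced simultaneously by independent workers.
T, W = 1000, 100
budget_unrolls = 2 * T                # same total compute for both methods
fulles_sequential_steps = T           # one start-to-finish sequential unroll
nres_sequential_steps = W             # T/W workers unroll in parallel
speedup = fulles_sequential_steps // nres_sequential_steps
assert speedup == T // W              # a 10x theoretical wall-clock gain here
```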
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. It seems that the biggest improvement of NRES over previous works is the parallelization ability.
Given that the score indicates accept, I maintain this score.
---
Reply to Comment 1.1.1:
Comment: Thanks very much for your reply and your recommendation to accept the paper.
Following the reviewer’s remark about the reason for the improvement of NRES, we briefly summarize the main benefits of NRES over prior works (also illustrated in the table in Figure 1(b)). NRES improves over prior ES methods for a number of reasons:
1. As the reviewer mentions, _NRES improves over FullES_ because NRES is an online method unlike the offline algorithm FullES and it thus has much __better parallelization__ ability than FullES. The gradient estimate from NRES can also have lower variance than FullES under the same total compute.
2. _NRES improves over the online method TES_ because TES suffers from truncation bias whereas NRES is __unbiased__.
3. _NRES improves over the online method PES_ because NRES has a significantly __lower variance__ than PES due to its noise-reuse property.
These properties together make NRES a particularly compelling approach relative to prior ES methods. | null | null | null | null | null | null |
Estimating Riemannian Metric with Noise-Contaminated Intrinsic Distance | Accept (poster) | Summary: The paper presents a novel mechanism for learning a Riemannian metric from distance observations. This is important for applications where relative observations are available (e.g. "objects x1 and x2 are different, while x2 and x3 are similar"), such as perception studies. The approach is based on local regression from which a Riemannian metric is deduced from a Taylor expansion of the geodesic distance. Limited experimental results demonstrate feasibility.
Strengths: * The paper approaches an important question that is highly understudied in the machine-learning community.
* The proposed Riemannian metric estimator is new and sufficiently simple to be practically useful.
* It is neat that the estimator also comes with a direct estimate of the Cristoffel symbols as these can otherwise be tedious to compute.
* The examples given throughout the paper are instructive and provide nice assistance to the reader.
Weaknesses: * The approach seems to be difficult to scale to higher dimensional data. I don't think this is a significant problem as the current application areas where distance observations are available often revolve around low-dimensional observations, e.g. in perception studies in psychology.
* The computational procedure seems to rely on a discretization of the input space, which will only work in low-dimensional cases.
* The kernel smoothing used to produce the Riemannian metric (eq. 3.10) is rather ad hoc. It would have been nice if this was more closely tied to the local regression. That being said, the approach basically follows established procedures, see e.g. [10].
### Related work
* I am not fond of the phrasing of lines 37-39, which suggests that the proposed work is the first time that Riemannian metrics are learned. The cited work of Lebanon [14] and Hauberg et al. [10] follows the same philosophy as the present paper, so I think a rephrasing would be appropriate.
* The authors are not the first to learn metrics from observations of distances. I think it would be good to at least acknowledge the vast literature on multi-dimensional scaling, which does not learn a Riemannian metric but learns from distances. Perhaps closer to the present work is the paper "Isometric Gaussian Process Latent Variable Model for Dissimilarity Data" (Jørgensen et al., ICML 2021) which learns a low-dimensional manifold and its Riemannian metric from noisy distance observations. The taken approach is quite dissimilar from the present paper, but the ambition is similar.
### Minor things
* Typo in line 219: "without or without" --- I suppose this should be "with or without".
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * What does the $\circ$ operator denote in Eq. 2.1? As I read the equation, all quantities are scalars, so I do not think that this denotes a Hadamard product, but this is nonetheless my best guess.
* I do not quite understand Eq. 3.2: Are there no further restrictions on $\epsilon_n$? If so, does that not mean that $P(Y < 0) > 0$, which is rather odd for distance observations, which must be strictly non-negative?
* There has been quite extensive work on pull-back metrics in autoencoder-like models (see e.g. "Latent Space Oddity: on the Curvature of Deep Generative Models", Arvanitidis et al., ICLR 2018). These can be viewed as smooth interpolations of local linear regressions since the metric is formed by Jacobian matrices. Do you see links between this line of work and the present paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The main paper only makes a brief mention of the limitations, while a more extensive discussion is given in the supplements. I appreciate the supplementary discussion but would encourage the authors to move this to the main paper to increase openness regarding limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For concerns regarding scalability to high-dim data, we kindly refer the reviewer to our general point 1. We also thank the reviewer for pointing out many related works; as discussed in our general point 3, we will add citations accordingly. We also discuss the positivity question in the general point 8. We will also fix the typo in line 219 (for the camera-ready version); thank you for the careful inspection.
The $\circ$ operator in eq. (2.1) is function composition, i.e., $f \circ g (t) = f(g(t))$. We adopted it to avoid too many brackets, especially in our case with many sub/superscripts.
## Ad hoc kernel smoothing in eq. (3.10)
The objective (3.7) and weights in equation (3.10) are motivated by the Taylor expansion (3.5) similarly to that in the local polynomial regression (e.g., section 5.4.3 of \[6\]). The weights (3.10) are relatively standard, treating the $(\delta_{u0}^1, \dots, \delta_{u0}^d, \delta_{u1}^1, \dots, \delta_{u1}^d)$ as a $2d$-array. This method has a solid theoretical foundation as established in our Proposition 4.1. One key difference between our proposal to that in \[10\] is that the weights are applied differently. We estimate the Riemannian metric tensor directly and the weights (3.10) are involved in the objective function (3.7), while the method in \[10\] constructs the metric tensor by smoothing multiple local distance metric (see eq. (6) in \[10\]), in a similar manner as our post-smoothing step (section S3 in the Supplement).
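For intuition about the kernel-weighted least-squares machinery referenced here, a one-dimensional textbook version of local linear regression (our simplification for illustration, not the estimator (3.7) itself) looks like:

```python
import numpy as np

# Local linear regression sketch: Gaussian kernel weights downweight points
# far from the target location u0, and a weighted least-squares fit recovers
# the local slope (the analogue of a first-order Taylor coefficient).
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 200)
y = np.sin(u) + 0.01 * rng.standard_normal(200)   # noisy observations

u0, bw = 0.0, 0.3
w = np.exp(-0.5 * ((u - u0) / bw) ** 2)           # kernel weights
X = np.stack([np.ones_like(u), u - u0], axis=1)   # local linear design matrix
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
# beta[1] estimates d/du sin(u) at u0 = 0, i.e. cos(0) = 1
assert abs(beta[1] - 1.0) < 0.1
```

In the paper's setting the same idea is applied with a multivariate design built from the Taylor expansion (3.5), and the fitted coefficients yield the metric entries directly.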
## Potentially misleading claim in lines 37-39
Lines 37-39 (and this paragraph spanning lines 34-42) emphasize that it is the *Riemannian metric* that we are targeting which is different to the *distance metric* as in the classic metric learning. It does not imply that this is the first time Riemannian metrics are learned, as we have referred to multiple references (as in lines 22-26) including those you kindly pointed out.
## Positivity
While the estimated Riemannian metric in a finite sample is not guaranteed to be positive definite, our asymptotic theory (proposition 4.1) shows that the estimated metric will be positive definite with probability tending to 1 given sufficient observations (near the point of estimation). In our experiments we rarely had estimates that were not positive definite and we did not encounter serious non-positive issues as shown in the last part of section S4.5 (page 15) and figure S4.10 of the Supplement. One could in principle adopt constraint optimization to enforce positive definiteness as that in the metric learning, but for simplicity we chose to employ an unrestricted algorithm that is more computationally efficient.
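As a concrete sketch of such a post-hoc fix (a generic eigenvalue-clipping repair, offered for illustration rather than as a step of our algorithm):

```python
import numpy as np

# Post-hoc repair for a finite-sample metric estimate that fails to be
# positive definite: symmetrize, then clip eigenvalues at a small floor.
def clip_to_pd(M, floor=1e-6):
    S = (M + M.T) / 2                                   # symmetrize first
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.maximum(vals, floor)) @ vecs.T

M_hat = np.array([[1.0, 0.0], [0.0, -0.5]])             # an indefinite estimate
assert np.all(np.linalg.eigvalsh(clip_to_pd(M_hat)) > 0)
```

An estimate that is already positive definite passes through unchanged, so the repair only acts where needed.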
Model (3.2) is general in that it can include both positive and negative distances, but in practice only positive distances are observed and our model is applied on these more intuitive setups. One could switch the link function and/or the loss function in a similar manner as in local quasi-likelihood [(Fan, Heckman, and Wand 1995)](https://www.tandfonline.com/doi/abs/10.1080/01621459.1995.10476496) to ensure strictly non-negative distance. But again, we chose to present the simplest model here.
## Link to works on pull-back metrics in autoencoder-like models
The pull-back approach tackles a similar problem recovering data space geometry. It utilizes the Jacobian matrix of some smooth mapping $f: \mathcal Z \to \mathcal X$ to pull the metric on $\mathcal X$ back to $\mathcal Z$. Typically the mapping $f$ is a generator/encoder, while $\mathcal Z$ and $\mathcal X$ are some latent space and the data space respectively. It is usually assumed that the metric $\mathcal M_x$ of input space is readily available, e.g., assumed to be Euclidean ([Arvanitidis, Hansen, and Hauberg 2018](https://openreview.net/forum?id=SJzRZ-WCZ)), or estimated via Riemannian metric learning ([Arvanitidis, Hauberg, and Schölkopf 2020](http://arxiv.org/abs/2008.00565)).
Our proposal can provide the metric of latent space directly based on similarity measures in the data space in a manner similar to our MNIST example. The major difference is that our method treats the low-dim embedding as a coordinate chart, which is a stricter assumption: the pull-back metric only requires smooth mappings between the data and latent space (e.g., generator or encoder).
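To make the pull-back construction concrete, here is a small numerical sketch (our toy illustration with a paraboloid embedding, not a model from the cited works): with a Euclidean metric on $\mathcal X$, the metric pulled back to $\mathcal Z$ is $M(z) = J(z)^\top J(z)$, where $J$ is the Jacobian of $f$.

```python
import numpy as np

# Toy pull-back metric: f embeds the plane Z = R^2 into X = R^3 as a paraboloid.
def f(z):
    return np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])

def jacobian(g, z, h=1e-6):
    # forward-difference numerical Jacobian (columns are partial derivatives)
    gz = g(z)
    return np.stack([(g(z + h * e) - gz) / h for e in np.eye(len(z))], axis=1)

def pullback_metric(z):
    J = jacobian(f, np.asarray(z, dtype=float))
    return J.T @ J

# At the origin the paraboloid is flat, so the induced metric is the identity.
assert np.allclose(pullback_metric([0.0, 0.0]), np.eye(2), atol=1e-4)
```

Away from the origin the metric inflates along the steep direction (e.g. at $z=(1,0)$ the first diagonal entry is $1 + (2)^2 = 5$), which is exactly the distortion a pulled-back geometry captures.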
---
Rebuttal Comment 1.1:
Title: Thanks for the follow-up
Comment: I appreciate the rebuttal. In particular, I had not seen the supplementary discussions which were pointed out. I found these helpful.
I have bumped my score a bit to reflect that I think the paper has merit and it should be published.
I think issues regarding scaling to higher dimensions remain, but given the limited work done in this field, I can accept that early papers are incomplete.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up and appreciation of our work! | Summary: The paper aims to extend metric and manifold learning by learning Riemannian metrics from functions of the observed data that are not related to an embedding space metric as is the usual case. Assuming there is an underlying Riemannian metric structure and that the observed dissimilarity is a known function of this structure, the authors show how a Riemannian metric can be learned from dense data. The method is tested on trip time data in New York and MNIST represented on a 2d manifold.
Strengths: - methods for estimating a Riemannian metric from dissimilarity measures are derived
- in principle, it can be a good idea relying on other distances than embedding space distances to learn geometric structure
Weaknesses: - learning Riemannian metrics has a long history in the literature, although not in this exact form
- I am unsure of the usefulness of the method:
-- are there any guarantees that a Riemannian metric compatible with the observed structure exists? I believe if the objective is actually a (sufficiently smooth) distance there are existence results, but it would be nice to see arguments for the more general cases in sec 3.1
-- it is assumed that a chart is known. One can always find a mapping to a lower-dimensional subspace such as is done in the MNIST example, but whether the data is actually 2d is unknown, and the uncertainty in the chart estimation is not taken into account
-- in practice, the result would likely be a Riemannian metric approximation in a chart that is estimated with high uncertainty and with strong assumptions on the dimensionality. The data would likely not be dense in higher dimensions, and so the estimated structure would very likely be a poor approximation. This can then be used for e.g. geodesic interpolation, but whether this is actually useful is not clear to me
-- what happens if the model is misspecified? The link functions are assumed to be known a priori, right?
- the developed estimation techniques have merit, but the fact that a Riemannian metric can be recovered from the metric (distances) is not a new result
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Convincing counterarguments to the weaknesses listed above.
- do you require the charts to be normal? If so, that restricts the set of metrics you can possibly learn, right? I.e., it could be that a true underlying Riemannian structure was not orthonormal with respect to the chosen chart
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: the limitations of the method could be described more precisely
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We acknowledge the existence of related literature on learning Riemannian metrics in lines 22-26 and will include more references as suggested by the reviewers. We hope the proposed framework can shed new light on this topic. See also our general response point 3, and our general point 4 for limitations.
## usefulness of the method
As we discussed in general point 5, existence results for such a compatible metric do exist. [Fefferman et al. 2020](https://doi.org/10.1137/19M126829X) also discussed a manifold reconstruction problem similar to our additive error case (eq. (3.2)). Their discussion focuses on abstract metric spaces with no pre-specified coordinates, while our setup is simpler as we assume the coordinate chart is readily available.
Indeed, lower-dimensional subspaces are always obtainable; the question is how faithfully they represent the original data. We consider the specification of the coordinate chart (via, e.g., tSNE) a modeling choice. In fact, assessing the uncertainty of the proposed models (3.1) could also provide insight into the quality of the dimension reduction: for example, a large uncertainty may suggest poor coordinate representations.
The link function is pre-specified and the model can be misspecified. However, our focus is the connection in (3.1) and (3.5); therefore only the simpler models (3.2) – (3.4) (as commonly seen in generalized linear models) are included. As discussed in our general point 1, more flexible models are possible under the proposed framework, and the effect of the link function is expected to be small.
## Do you need normal coordinates?
No, the chart need not be normal. See line 140 of the main text. The Christoffel symbols are coordinate dependent and vanish under normal coordinates. Our method does not assume this and the Christoffel symbols are estimated.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My scoring has not changed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review of our work. | Summary: This paper develops a theory for estimating the Riemannian metric tensor for a given set of observations and some addition information. This additional information includes a (noisy) measure of similarity between the given points in a pairwise fashion. Examples of this information includes the geodesic distance, or a binary response about the similarity of types, or a binary response about relative comparison.
The formulation relies on a formula for the intrinsic distance between corresponding points on two geodesics shot from the same base p. This formula approximates the said distance using the Euclidean chords, Riemannian metric tensor, and the Christoffel symbols. The first term computes the Riemannian metric between the shooting vectors and the second term accounts for the curvature. The paper seizes on this linear approximation and sets up a regression problem for estimating these matrices from the given data. In this way, it estimates the metric tensors and the Christoffel symbol at point p. The paper further develops the estimation theory (bias and variance) for the metric tensor estimation.
This theory is demonstrated using several experiments, some based on simulated data and some on real data sets. The simulated data helps validate the method for a known geometry (sphere, double spiral). The real data experiments involve learning metrics for the taxi travel times and MNIST images embedded in R^2 using tSNE.
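The regression formulation described above can be pictured with a toy numerical sketch (a hypothetical illustration only, not the paper's code; it keeps just the leading metric term of the local expansion, omits the Christoffel correction, and all names such as `G_true` are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
G_true = np.array([[2.0, 0.5],
                   [0.5, 1.0]])  # metric tensor at the base point p (simulator-only)

# Euclidean chords delta of nearby point pairs, and noisy squared distances
# d^2 ~ delta^T G delta + noise (the leading term of the local expansion).
n = 500
deltas = rng.normal(scale=0.05, size=(n, 2))
d2 = np.einsum("ni,ij,nj->n", deltas, G_true, deltas) + rng.normal(scale=1e-5, size=n)

# d^2 is linear in the free entries (G11, G12, G22) of the symmetric metric,
# so the entries can be recovered by ordinary least squares.
X = np.column_stack([deltas[:, 0] ** 2,
                     2 * deltas[:, 0] * deltas[:, 1],
                     deltas[:, 1] ** 2])
g11, g12, g22 = np.linalg.lstsq(X, d2, rcond=None)[0]
G_hat = np.array([[g11, g12],
                  [g12, g22]])
```

Estimating the Christoffel symbols would add the higher-order terms of the expansion to the design matrix in the same linear fashion.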
Strengths: -- The problem of learning Riemannian metric from the data is quite important and challenging.
-- The paper formulates this problem in a novel and interesting way and provides analytical insights, rather than the current systems that apply deep neural networks to such problems.
-- The paper goes on to develop estimation theory for the metric tensor. There is some interesting statistical contributions here.
Weaknesses:
-- The simulation experiments seem to involve manifolds with constant curvature (unless the curvature changes along the spiral). Perhaps the authors can try their method for ellipsoids or some variable curvature manifolds.
-- How does the method work if the sampled points on the manifold are sparse? I feel that the linear regression model derived here requires the manifold points to be close otherwise the errors will start piling up.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
-- I believe that the equation connecting geodesic distance to the Mahalanobis distance is the same kind that is used to derive Cramer-Rao bound on estimation error using the Fisher-Rao Riemannian metric. It would be interesting to make that connection if there is one.
-- What are the practical situations where the pairwise geodesic distances between points on a manifold are given, and one does not know the Riemannian metric? I understand the taxi example but for the MNIST example one has to use an embedding which is somewhat of an arbitrary choice.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations:
The authors have not explicitly addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For the practical problems with pairwise distance but no Riemannian metric, we kindly refer to our general response point 2. See also our general point 4 for limitations.
## The simulation experiments seem to only involve manifolds with constant curvature.
Our theoretical foundation allows for general non-constant curvature, and this is the backbone that makes the method work in the practical applications demonstrated. The proposed method applies to manifolds with variable curvature such as an ellipsoid, and we can showcase this via an ellipsoid example if the paper is accepted and time permits.
## How does the method work if the sampled points on the manifold are sparse?
Sufficiently dense data is indeed necessary for the proposed method, which could be demanding for high-dimensional data due to the curse of dimensionality. However, data often exhibit the manifold phenomenon, namely that they intrinsically lie close to a low-dimensional manifold (see also our general point 1). Thus, as illustrated in the MNIST example, we can apply the proposed method after a dimension reduction step such as tSNE. The resulting representations tend to be dense since they lie in a low-dimensional space. This substantially alleviates the curse of dimensionality, and the dense local-neighborhood requirement is more likely to hold.
## Connection to Fisher-Rao metric
When working with a statistical manifold, namely a parametric family where each point is a distribution, the Fisher-Rao metric is the canonical Riemannian metric which enjoys nice statistical interpretation. Its component matrix is the Fisher information, the inverse of which becomes the Cramer–Rao lower bound. Amari has shown that the Kullback–Leibler divergence (like a distance) is a function whose second differential produces the Riemannian metric (e.g., [Murray Rice 1993](https://books.google.com/books?id=ZBa7F9LrDrMC&pg=PA76#v=onepage&q&f=false)). Though, we are uncertain whether there would be a direct connection to our work estimating the metric tensors, while the Fisher-Rao's are based on parametric family. | Summary: This paper proposes a method to estimate the Riemannian metric of data space when coordinate representations of each data point and some noisy similarity measurements among data points are provided. The similarity measurements of different types, such as noise-contaminated distances, similarity/dissimilarity labels, and comparative similarity labels, are probabilistically modeled as functions of intrinsic distances between data points. The squared geodesic distances are approximated to be linear to the Riemannian metric and Christoffel's symbol via Taylor's expansion, leading the metric estimation problem to be like a maximum likelihood estimation using a generalized linear model with each entry of the Riemannian metric and the Christoffel's symbol as its parameters. Asymptotic convergence rates of the bias and variance of the estimator are derived for the case where noisy distances are given as the similarity measure. Experiments are performed using some simulated data, New York taxi trip duration data, and MNIST data sets to demonstrate the proposed estimator's benefits in capturing the underlying geometry of the data space.
**Post rebuttal** I have increased my score from 4 to 5. Lingering concerns involve providing more practical applications and comparisons to other methods.
Strengths: * The paper is in general well-written.
* The paper provides some interesting ideas to characterize the Riemannian geometry of data space by utilizing noisy similarity measurements among data points, such as continuous noisy distances, binary similarity labels, and binary relative comparison labels, in a unified framework.
* The proposed method to estimate the Riemannian metric is quite simple, seems original, and shows reasonable experimental results.
* This paper provides complete proof of a proposition for the local approximation of the geodesic distance and that for the asymptotic convergence analysis of the estimator.
Weaknesses: * The considered problem setting seems to be quite restrictive. It would be rare that a problem provides both low-dimensional coordinate representations and meaningful intrinsic distances among data points. Obtaining each of them has been an important research topic for decades.
* The experiments need to provide practical applications of the proposed method. For example, obtaining geodesics in the experiments is usually done to demonstrate the validity of the estimated Riemannian metrics but not for further use. Suggesting more practical uses for real data based on the estimated Riemannian metric, computed geodesics, or other subsequent geometric quantities such as lengths or volumes, would significantly benefit the community.
* This paper lacks any comparison with other Riemannian metric estimation methods. A direct comparison would be possible to the method in Perrault-Joncas et al., which considers a mapping $f$ from the data submanifold embedded in the ambient space (endowed with an ambient space metric) to a coordinate chart and estimates the push-forward metric in the coordinate chart via the mapping $f$.
* Experiments in high-dimensional settings are lacking. How the method behaves according to the dimension would be valuable information. A worry here is that local regions to estimate the Riemannian metrics would increase exponentially with respect to the dimension.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * When extracting the Riemannian metric from binary similarity labels or binary relative comparison labels, there may not be any ground truth Riemannian metrics. Is it then appropriate to call the proposed framework an 'estimation'?
* How can the proposed probabilistic models be justified? Distance scale may significantly affect the results according to the current choice of models in (3.3) and (3.4). Also, different options of probabilistic models would result in other Riemannian metrics.
* Is there any guarantee that the local squared distance in (3.5) is always positive when the estimation is performed based on (3.7)? This may also be related to the point that the estimated metric is not guaranteed positive-definite. I wonder if solving for the case (3.11) and (3.12) can make the estimated squared distances, i.e., $\eta_u$s, negative.
* There are no explanations for obtaining estimates for Christoffel's symbols at arbitrary points, i.e., $\hat{\Gamma}\circ \gamma(t)$, which are required for obtaining geodesics.
* Regarding the post-smoothing of the obtained Riemannian metrics explained in the supplement, the weighted averaging of Riemannian metrics seems strange. For example, we cannot simply add tangent vectors on different tangent spaces but should apply parallel transport for them to be on the same tangent space.
**[Minor comments]**
* It must be explicitly stated in the main text the location of the supplement containing the proof of each proposition.
* Since the proposed approximations and estimation methods are valid only locally, it would be better to explain how the methods can be applied globally in the main text, not in the supplement.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations of the paper. For other possible limitations, please refer to the weaknesses and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
For the weakness pointed out regarding restrictive setup, application, comparison with other methods, and dimensionality, we kindly ask the reviewer to refer to our general response points 1--3. See our general response point 5 for compatibility of the metric.
The reviewer is correct that the proposed method (and most, if not all, local polynomial based approaches) does not scale well with increasing dimension. That is why this work is based on a low-dimensional manifold assumption, as discussed in general point 1. As demonstrated in the MNIST example, it successfully preserves the geometry of high-dimensional image data using low-dimensional representations.
## How can the proposed probabilistic models be justified?
A metric estimated under a constant multiple of the distances (scaling) will differ only by a constant factor. This is a special case of conformality. For example, one can follow the definition of the Christoffel symbol (line 88) to see that it is invariant to the distance scale. In other words, different distance scales act like a unit conversion and do not affect geometric features such as angles, shapes, and the geodesic curves (though geodesic lengths would differ).
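This invariance can be checked numerically (a hypothetical sketch; the toy metric `g` below is invented for illustration): the Christoffel symbols computed from a metric and from a constant multiple of it coincide, since the derivative terms scale by the constant while the inverse metric scales by its reciprocal.

```python
import numpy as np

def christoffel(metric, x, eps=1e-5):
    """Gamma^k_{ij} = 0.5 * g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij}),
    with partial derivatives taken by central finite differences."""
    d = len(x)
    ginv = np.linalg.inv(metric(x))
    dg = np.zeros((d, d, d))  # dg[l, i, j] = d g_{ij} / d x^l
    for l in range(d):
        e = np.zeros(d)
        e[l] = eps
        dg[l] = (metric(x + e) - metric(x - e)) / (2.0 * eps)
    gamma = np.zeros((d, d, d))
    for k in range(d):
        for i in range(d):
            for j in range(d):
                gamma[k, i, j] = 0.5 * sum(
                    ginv[k, l] * (dg[i][j, l] + dg[j][i, l] - dg[l][i, j])
                    for l in range(d)
                )
    return gamma

def g(x):
    # a toy non-constant metric on R^2, for illustration only
    return np.array([[1.0 + x[0] ** 2, 0.2 * x[0] * x[1]],
                     [0.2 * x[0] * x[1], 1.0 + x[1] ** 2]])

x0 = np.array([0.3, -0.7])
gamma1 = christoffel(g, x0)
gamma_scaled = christoffel(lambda x: 4.0 * g(x), x0)  # "unit conversion" of distances
assert np.allclose(gamma1, gamma_scaled, atol=1e-6)   # Christoffel symbols unchanged
```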
Employing different probabilistic models (specifically $g$ and $Q$) may lead to different estimated metrics, but by the chain rule the estimated metric changes only if the derivative of the link at 0 changes. As in generalized linear models, this is a necessary modeling component, determined by the data type and domain knowledge. Our contribution focuses on the possibility of estimating the metric under such a framework with a spread of geodesics, for which the simple models in (3.2) – (3.4) were considered.
## Positivity
While the estimated Riemannian metric in a finite sample is not guaranteed to be positive definite, our asymptotic theory (Proposition 4.1) shows that the estimated metric will be positive definite with probability tending to 1 given sufficient observations (near the point of estimation). In our experiments we rarely had estimates that were not positive definite, as shown in the last part of section S4.5 (page 15) and figure S4.10 of the Supplement. One could in principle adopt constrained optimization to enforce positive definiteness, as in metric learning, but for simplicity we chose an unrestricted algorithm that is more computationally efficient.
Model (3.2) is general in that it allows both positive and negative distances, but in practice only positive distances are observed, and the model is applied in this more intuitive setup. One could switch the link function and/or the loss function, in a manner similar to local quasi-likelihood [(Fan, Heckman, and Wand 1995)](https://www.tandfonline.com/doi/abs/10.1080/01621459.1995.10476496), to ensure strictly non-negative distances. But again, we chose to present the simplest model here.
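A generic post hoc version of the constrained alternative mentioned here is to project an indefinite finite-sample estimate onto the positive semidefinite cone by eigenvalue clipping (a standard fix, sketched below; this is not part of the paper's algorithm):

```python
import numpy as np

def project_psd(G, floor=1e-8):
    """Nearest-PSD fix: symmetrize, then clip eigenvalues from below."""
    w, V = np.linalg.eigh((G + G.T) / 2.0)
    return (V * np.maximum(w, floor)) @ V.T  # V diag(clipped w) V^T

G_bad = np.array([[1.0, 0.0],
                  [0.0, -0.2]])   # a (rare) indefinite finite-sample estimate
G_fix = project_psd(G_bad)        # now positive definite
```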
## There are no explanations for obtaining estimates for Christoffel's symbols at arbitrary points.
Following (3.6) – (3.9), the Christoffel symbols can be estimated simultaneously with the metric, and analogous post-smoothing steps (see section S3) also apply. Post-smoothing is adopted mainly to speed up the geodesic computation, so that we do not need to re-estimate the tensors at every point requested by the ODE solver, which is time-consuming. We also noticed that the ODE solver (and the resulting geodesic curves) benefited from the added smoothness, especially in the cases with binary similarity measures.
## The weighted averaging of Riemannian metrics seems strange in the post-smoothing step.
Since we are working on a chart, post-smoothing is applied on the component functions instead of on the (coordinate-free) tensor directly, so comparison of tangent vectors on different tangent spaces is not involved. More specifically, under the local coordinate chart $(x^1, \dots, x^d)$, the estimated metric tensor is fully spelled out as $G_{ij} dx^i dx^j$. We post-smooth the estimate of the (continuous) component functions $G_{ij}, i, j = 1, \dots, d$. Implicitly, we assume the coordinate chart contains the data domain, so that transition maps (to bridge different local coordinate charts) are not involved.
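Concretely, this post-smoothing can be read as ordinary kernel smoothing applied independently to each scalar component function $G_{ij}$ on the chart (a hypothetical sketch; the function and variable names are ours, not the authors'):

```python
import numpy as np

def smooth_metric_field(points, G_vals, query, h=0.5):
    """Nadaraya-Watson smoothing of each chart component G_ij at `query`.

    points: (n, d) chart coordinates; G_vals: (n, d, d) metric estimates there.
    No parallel transport is needed: we smooth the scalar fields G_ij on the
    chart, not coordinate-free tensors living on different tangent spaces.
    """
    w = np.exp(-np.sum((points - query) ** 2, axis=1) / (2.0 * h ** 2))
    w /= w.sum()
    return np.einsum("n,nij->ij", w, G_vals)  # weighted average, componentwise

pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(200, 2))
G_field = np.broadcast_to(np.eye(2), (200, 2, 2))   # a constant toy field
G_smooth = smooth_metric_field(pts, G_field, np.array([0.0, 0.0]))
```

Smoothing a constant field returns the same constant, since the kernel weights sum to one.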
## Minor comments
- *It must be explicitly stated in the main text the location of the supplement containing the proof of each proposition.*
We will explicitly state this in the camera-ready version if accepted. Thank you for pointing this out.
- *Since the proposed approximations and estimation methods are valid only locally, it would be better to explain how the methods can be applied globally in the main text, not in the supplement.*
We will add a short sketch of the post-smoothing and refer to the Supplement in the main text.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses. Many questions have been answered, but some concerns remain:
- Applications beyond obtaining geodesics are lacking.
- About justifying the probabilistic models: what bothered me was that the output of models in (3.3) and (3.4) can depend on the scale, i.e., too close distances will make Y = 0 for all data points and vice versa. This may not affect the estimation, though. Can you elaborate more on 'the estimated metric will change only if the derivative of the link at 0 changes by the chain rule'?
- Comparisons to other benchmarks seem required.
- I still think weighted averaging of the Riemannian metrics is a bit *ad hoc*. Mentioning tangent vectors was just an example. We typically do not sum Riemannian metrics defined on different points.
Due to the above aspects, I will maintain my score for now.
---
Reply to Comment 1.1.1:
Comment: To respond to your additional concerns:
- We also demonstrated an application in Sec 6: estimating the underlying cost of travel via the estimated Riemannian metric.
- We may now better appreciate the reviewer's question. Model (3.3) and (3.4) are postulated for all pairs or triplets of data points, and there is no specification of scale involved. For geometry, because the Riemannian metric is a local quantity, it depends on the pairwise distances only through those measured for close-by points, so in some sense only the responses for these pairs/triplets of points are important. However, this is quite different from saying that the model depends on scale because the postulation of the models is scale-independent.
Only the estimation process involves scale, namely the bandwidth $h$ in (3.10), as a tuning parameter for the bias-variance tradeoff. We take the analogy of nonparametric regression where there is a single underlying target (the conditional mean function) while the bandwidth is a device to approach a consistent estimate. Again, here the scale is only needed for the estimation.
If the locations $X_{uj}$ are very close, the response $Y$ will most likely be 0, so too small a tuning parameter $h$ results in no variation within the neighborhood, inflating the estimation variance. However, when the bandwidth is chosen well according to the bias-variance tradeoff (trading off data availability against neighborhood radius), there will be moderate variation in the response within the neighborhood, and the estimation will be consistent. This is also what we observed in the numerical demonstrations.
By (4.1) and the model (3.1), we can derive that if $X_{u0}$ and $X_{u1}$ are close, $E(Y_u\mid X_{u0},X_{u1}) \approx Dg^{-1}(0) \times \delta^i_{u,0-1}\delta^j_{u,0-1}\beta_{ij}^{(1)}$, where $Dg^{-1}(0)$ denotes the derivative of $g^{-1}$ at 0. The targeted Riemannian metric will thus change only if the derivative of the link at 0 changes.
- There are few methods and implementations that are close to ours in geometric perspective, even though methods for producing geodesics are quite common. For comparability reasons, we did not include other methods as benchmarks.
- The Riemannian metric is a smooth tensor field, so interpolating/smoothing over nearby locations is well justified both theoretically and practically. Our choice is made for computational reasons, and in principle our method can be implemented without interpolating the metrics.
Rebuttal: We truly appreciate the reviewers' careful reading of our manuscript and their helpful comments. We summarize the common weaknesses/questions and provide our general response below. See our replies to individual reviews for specific responses.
## 1. Restrictive problem/model setting
*The reviewers note that our setting could be restrictive: a. low dimensionality and a meaningful distance, b. the probabilistic formulation in eq (3.2) – (3.4), and c. a pre-specified link function.*
1. Our method and calculation are designed for data drawn from a manifold with low intrinsic dimensions. The observed similarity measures are generated based on intrinsic geodesic distance, as in eq (3.1). The low-dimensional manifold assumption ([Bengio et al. 2013](https://doi.org/10.1109/TPAMI.2013.50)) is commonly satisfied by real-world image and audio datasets because of the manifold phenomenon, and also satisfied in perception studies in psychology as pointed out by a reviewer. The manifold assumption enables us to apply nonlinear dimension reduction and avoid the curse of dimensionality, even though the raw data, like images, live in a high-dimensional ambient space.
2. The probability model is proposed as a principled guide for method development and also for theory. The specific models enlisted in Example 3.1 and Example 3.2 are given as commonly encountered scenarios that can be handled by our framework; the proposed method is not limited to these models.
3. The choices of link function $g$ and loss function $Q$ are flexible and can accommodate a variety of data generating mechanisms similar to what generalized linear model (GLM) can handle. We expect the effect of the link function to be small because the estimation is done locally and only the derivative of the link function at the origin matters.
## 2. Lack of practical application/importance
We demonstrated two applications, from transportation and computer vision. We expect our method to also be widely applicable in perception studies, but we are not aware of suitable open datasets in that area.
Problems do exist where pairwise distances are easily obtainable but the driving geometry is not. Other than our taxi and MNIST examples, correlations among fMRI signals (which track recurrent coactivation of neurons) are used to analyze brain functional connectivity ([van den Heuvel and Hulshoff Pol 2010](https://doi.org/10.1016/j.euroneuro.2010.03.008)). Another (remotely) related topic is travel time tomography, where the internal structure of a medium (e.g., an organ, the earth) is estimated from the travel time of waves (e.g., ultrasound, seismic). These also utilize pairwise measures to capture underlying structure.
In many cases, the data space is non-Euclidean, thus a Riemannian metric is needed to capture the intrinsic geometry. Yet, finding an appropriate one can be a challenging task but of great interest, as it could lead to more accurate similarity measures, better clustering algorithms, and improved recognition systems. We refer to literature in our main text and the general point 3 and references therein.
## 3. Existing methods learning Riemannian metric
*There is a large body of existing literature learning Riemannian metric, and there is no comparison to existing methods such as that in \[18\].*
We will include additional citations to acknowledge more recent literature utilizing latent variable models, including those based on pull-back ([Arvanitidis, Hauberg, and Schölkopf 2020](http://arxiv.org/abs/2008.00565)) and/or GP-LVM ([Jørgensen and Hauberg 2021](https://proceedings.mlr.press/v139/jorgensen21a.html)).
Meanwhile, stemming from distance metric learning, our work focuses on connecting the similarity measures to the Riemannian metric and its higher order derivatives (as in our eq. (3.1) and (3.5)). Admittedly there is plenty of room for further development, including benchmarking against comparable methods, which cannot be exhausted in a single paper.
The method of Perrault-Joncas et al. (2013) ( \[18\] ) targets a more limited scenario in that it starts with noiseless pairwise extrinsic Euclidean distances, while our method handles general distance metrics with noise. Their method thus cannot be applied to most of the numerical demonstrations in our work, except the MNIST digits. Moreover, no code implementation of their work is available, which makes comparison difficult.
## 4. Lack of open discussion on limitations
We do recognize several limitations and discuss them in section S1. We appreciate the reviewers' interest and deep dive into our Supplement. The limitations section is placed there because of the page limit on the main text. We will add a more prominent remark in the main text linking to it.
## 5. What are we estimating?
*Can we always find a Riemannian metric compatible with the similarity measure, so that we are actually “estimating” some well-defined quantities, especially for binary measures?*
It is not always guaranteed that a (distance) metric space is a Riemannian manifold, let alone when the measurements are binary. Accordingly, for an underlying Riemannian metric to exist, the similarity measure needs to be induced by the geodesic distance on some Riemannian manifold. Thus our method is based on the manifold assumption (Bengio et al. 2013) and on the assumption that the observed similarity measures are induced by some Riemannian metric, which is what we estimate. We are not aware of a definitive answer for the binary situation. However, an abstract metric space can be approximated by a Riemannian manifold (theorem 1 of \[7\]). A later paper ([Fefferman, et al. 2020](https://doi.org/10.1137/19M126829X)) also discussed a manifold reconstruction problem similar to our additive error case (eq. (3.2)).
## 6. Positivity
*The estimated metric tensors might fail to be positive-definite and model (3.2) allows negative distance.*
Due to length limit of a single comment, see the Positivity section of our response to reviewers E2so and uGdp (identical). | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models | Accept (poster) | Summary: This paper presents Uni-ControlNet, a model that aims to enhance Text-to-Image diffusion techniques by allowing the concurrent use of multiple local and global controls. It only requires two additional adapters, regardless of the number of controls used, and circumvents the necessity of training from scratch. The authors assert that Uni-ControlNet performs favorably in terms of controllability, generation quality, and composability.
Strengths: - The proposed method allows for the simultaneous utilization of different local and global controls within one model, making it flexible and composable.
- It eliminates the need for training from scratch, reducing costs and making it suitable for real-world deployment.
Weaknesses: - Limited Novelty and Contribution: The paper's primary contributions are its condition injection strategy and training approach. However, the condition injection strategy appears to be derived from SPADE, and the training strategy seems to be based primarily on empirical evidence, without providing much novel insight or theoretical explanation.
- Insufficient Detail in Discussion: The description of the training strategy and the inference process, stated as major contributions, are not sufficiently clear. It remains unclear how the authors handle other conditions when only one condition is being utilized. It seems problematic to set the local conditions' values to zero with the intent of rendering them empty. Moreover, it seems that Uni-ControlNet cannot handle multiple conditions of the same type, as suggested in Figure 2 of the Supplementary Materials.
- Incomplete Comparisons in Experiments: The experimental comparison does not seem comprehensive. For instance, it's known that Stable Diffusion 2.1 unclip can also accept CLIP image embeddings as inputs, like global condition in this paper. Additionally, ControlNet can accommodate multiple conditions. Furthermore, the quantitative results do not appear to be particularly impressive - with the method achieving the best result in only 4 out of 8 instances in Table 2, and only 2 out of 8 in Table 1. Same for CLIP score in Supplementary Materials.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Have you considered introducing a metric that specifically evaluates the controllability of Uni-ControlNet? FID primarily assesses image quality, but it doesn't provide a measure for controllability.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your valuable comments!**
**Q1. Novelty and Contribution.**
**Answer**: Thanks for your comments! Please refer to the common Q1.
**Q2: Training strategy and inference process.**
**Answer**: As mentioned in line 149 of our main paper, we concatenate all the local conditions along the channel dimension to enable simultaneous use of different conditions. During training, as stated in lines 178-180 of the main paper, we adopt a predefined probability to randomly drop each condition, along with an additional probability to deliberately keep or drop all conditions. For the dropped conditions, we set the value of the corresponding input channels to 0. This simple strategy has been shown to be very effective, because the feature extractor in the local adapter can learn to understand the user's intention through the supervision loss during training.
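The dropping scheme described here might look roughly like the following sketch (our guess at an implementation, not the authors' code; the probabilities and shapes are placeholders):

```python
import numpy as np

def drop_conditions(local_conds, p_drop=0.3, p_all=0.1, rng=None):
    """Randomly zero whole local conditions before channel-wise concatenation.

    local_conds: list of (3, H, W) arrays, one per condition type.
    A dropped ("empty") condition is represented by all-zero channels.
    """
    rng = rng or np.random.default_rng()
    conds = [c.copy() for c in local_conds]
    r = rng.random()
    if r < p_all:                       # deliberately drop all conditions
        keep = [False] * len(conds)
    elif r < 2.0 * p_all:               # deliberately keep all conditions
        keep = [True] * len(conds)
    else:                               # drop each condition independently
        keep = [rng.random() > p_drop for _ in conds]
    for c, k in zip(conds, keep):
        if not k:
            c[:] = 0.0
    return np.concatenate(conds, axis=0)  # (3 * num_conditions, H, W)

x = drop_conditions([np.ones((3, 8, 8)) for _ in range(7)])
assert x.shape == (21, 8, 8)
```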
**Q3: How to achieve composite control of 2 conditions with the same type.**
**Answer**: Really good point! In real-world applications, users can easily compose two or more conditions of the same type before feeding them to the model, e.g., by drawing the sketches of multiple objects on one canvas. However, considering it an interesting research point, we tried a simple strategy called "Uni-Channels". Specifically, we augment the input by adding three extra condition channels. For instance, if the original inputs had 21 channels (3 for each condition, totaling 7 local conditions), with Uni-Channels we now have 21 + 3 input channels.
During training, we feed the Uni-Channels with randomly selected condition types extracted from the input natural images. We observe that, because the extra condition channels are shared across condition types, Uni-Channels performs well for two-condition composition of the same condition type. The visualization results are depicted in Figure 5 of the rebuttal PDF.
**Q4: Comparison with Stable Diffusion 2.1.**
**Answer**: Thanks for your suggestions! We compare our method with 2 Stable Diffusion models, Stable Diffusion 2 - depth ("SD2-depth") and Stable Diffusion 2 - unclip ("SD2-unclip") which support the inputs of depth map and reference image respectively. The visualization results are shown in Figure 6 in the rebuttal PDF. Additionally, we provide the quantitative results below:
| FID | Depth | Content |
|:---|:---:|:---:|
| SD2-depth | **17.76** | / |
| SD2-unclip | / | 24.12 |
| Ours | 21.20| **23.98** |
| CLIP score | Depth | Content |
|:---|:---:|:---:|
| SD2-depth | 0.2516 | / |
| SD2-unclip | / | **0.2497** |
| Ours | **0.2561** | 0.2402 |
It is important to note that for SD2-depth and SD2-unclip, the whole model is fine-tuned to learn the depth map or the reference images instead of only fine-tuning adapters, which is the key factor contributing to their great performance. Additionally, when compared to other controllable diffusion models like T2I-Adapter and ControlNet, SD2-depth and SD2-unclip outperform them, as demonstrated in Table 2 of the main paper and Table 1 of the supplementary material.
**Q5: Comparison with Multi-ControlNet.**
**Answer**: Please refer to the common Q2.
**Q6: Quantitative results are not particularly impressive?**
**Answer**: Thanks for your valuable comments, but we respectfully disagree with this point. Regarding the quantitative comparison with other methods presented in Table 2 of the main paper, our approach achieves the best performance in 6 out of 8 metrics (or 4 out of 6, considering that T2I-Adapter does not support MLSD and HED), even with one unified single model. In the ablation study, it is important to note that Training-S2 is also our method: Training-S2 involves additional joint fine-tuning after our separate fine-tuning. Therefore, in Table 3, we achieve the best results in 7 out of 8 metrics. The CLIP scores provided in the supplementary file also reflect our strong performance.
Furthermore, our method demonstrates good results in the user study in the supplementary material, which provides reliable and straightforward indications of our superior performance from the user perception perspective.
**Q7: Evaluation of the controllability.**
**Answer**: Really great question! How to evaluate controllability is a common and important problem for controllable diffusion models. We believe a user study is the most accurate way to evaluate controllability from the user-perception view, and we already include this metric in our user study (provided in the supplementary material). However, following your suggestion, we also tried some automatic controllability evaluation metrics. Please refer to the common Q3.
---
Rebuttal 2:
Title: Help check if questions are well addressed.
Comment: Dear Reviewer MFNW,
We would like to express our appreciation for your efforts and suggestions! Could you please spare some time to check the response and see if your concerns are well addressed? We would be delighted to discuss with you and address any questions you might still have.
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors for the insightful explanation! I think most of my concerns are addressed. Thus, I would like to raise my rating to "weak accept".
---
Reply to Comment 2.1.1:
Title: Thanks for your comments!
Comment: Dear Reviewer MFNW,
We are glad that your concerns have been well addressed. Really appreciate your prompt response and valuable feedback! | Summary: This paper proposes Uni-ControlNet that leverages lightweight local and global adapters to enable precise controls over pre-trained T2I diffusion models.
Strengths: 1. This paper is well written and organized.
2. The idea of local/global adapter to achieve all-in-one control is reasonable and interesting.
3. The results seem good.
Weaknesses: Since this paper is easy to follow and self-consistent, I have only a few minor questions:
1. Please compare the training costs of Uni-ControlNet with those of other methods (T2IAdapter, ControlNet).
2. I notice that the different conditions are concatenated as inputs to the adapter. If we want to add other control conditions, does the adapter need to be retrained?
Overall, although this work has some limitations, I think it meets the bar of NeurIPS.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Comparing the training cost of Uni-ControlNet with other methods.**
**Answer**: Great suggestions! Since the scale of the training set and the number of training epochs vary across different methods, we report the time cost of a single training step as a measure. The reported result is the average time cost across different conditions for each method:
| | ControlNet | T2I-Adapter | Ours |
|:---|:---:|:---:|:---:|
| Seconds | 5.01 | **3.73** | 5.16 |
The results show that both ControlNet and our model have similar training costs, with an average of around 5 seconds per training step. In contrast, T2I-Adapter exhibits a lower training cost, which can be attributed to the lightweight nature of a single adapter within the T2I-Adapter model.
It is important to note that, regardless of the number (N) of conditions, we only need to train a unified single model. However, for ControlNet and T2I-Adapter, their training cost will increase linearly with N, as they require training a dedicated model for each specific condition.
**Q2: Extending a trained Uni-ControlNet to newly added Conditions.**
**Answer**: Super insightful question! To extend a trained Uni-ControlNet to support new conditions, we conducted an experiment in two steps for comparison and analysis purposes. First, we train a local adapter specific to N conditions. Next, we introduce a new type of condition and extend the trained adapter to (N+1) conditions. The adaptation involves modifying the input channels of the first convolutional layer within Uni-ControlNet's feature extractor. We then retrain the local adapter with 4 different retraining strategies (R1-4) to accommodate the new conditions:
1. Retraining the entire feature extractor (R1),
2. Only retraining the pre-feature extractor, which is the part that projects the condition from resolution 512 to 64 (R2),
3. Only retraining the first convolutional layer in the feature extractor (R3),
4. Without retraining, i.e., random initialization of the first convolutional layer in the feature extractor (R4).
During the retraining process, we keep the weights of the copied encoder in the local adapter fixed. We utilize a training dataset of 300k samples for the retraining. We show the extension from [MLSD + HED + Sketch + OpenPose + Depth + Seg] to [MLSD + HED + Sketch + OpenPose + Depth + Seg + **Canny**]. The results of this extension process are presented in Figure 2 of the rebuttal PDF. We surprisingly observe that retraining solely the first convolutional layer in the feature extractor already enables Uni-ControlNet to adequately handle the newly added condition. This is a great feature that enables our model to quickly expand to new conditions!
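As a concrete illustration of the channel-expansion step that all four strategies share, here is a minimal NumPy sketch of widening the first convolution's input channels while keeping the trained weights. The shapes, the initialization scale, and the function name are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def extend_first_conv(weight, extra_in=3, rng=None):
    """Widen a conv weight of shape (out, in, kh, kw) so the feature
    extractor accepts one more 3-channel condition: existing channels keep
    their trained weights, new ones are freshly initialized (a sketch in
    the spirit of strategies R3/R4 above)."""
    if rng is None:
        rng = np.random.default_rng()
    out_c, in_c, kh, kw = weight.shape
    new_part = rng.normal(0.0, 0.02, size=(out_c, extra_in, kh, kw))
    return np.concatenate([weight, new_part], axis=1)

w = np.random.default_rng(0).normal(size=(64, 21, 3, 3))  # 7 conditions x 3 ch
w_ext = extend_first_conv(w)                              # + Canny -> 24 ch
print(w_ext.shape)  # (64, 24, 3, 3)
```

Only the newly appended slice then needs retraining (R3), while the original 21-channel portion can stay fixed.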
---
Rebuttal 2:
Title: Are there any further questions or concerns?
Comment: Dear Reviewer n8So,
We sincerely appreciate your efforts and positive feedback! Could you please find some time to review the response and see if your questions are well answered? We are very happy to discuss any remaining questions you might still have.
---
Rebuttal Comment 2.1:
Comment: Thanks for the authors' rebuttal. I think my concerns have been addressed. I would like to keep the original rating (weak accept).
---
Reply to Comment 2.1.1:
Title: Thanks for your comments!
Comment: Dear Reviewer n8So, we really appreciate your valuable comments, prompt response, and recognition of our paper. | Summary: This paper proposes a method for controllable T2I generation from a pretrained diffusion model. The main contribution is that they use only two adapters: one local (e.g., edge map, keypoints, etc.) and one global (e.g., image). For local conditions, they use the ControlNet design but concatenate the conditions as input. For global conditions, they extract image features and treat them as text tokens.
Strengths: The idea is simple and writing is clear
Weaknesses: There are several weaknesses in this paper:
1. The technical novelty is incremental. Although they tried some ideas, such as SPADE-like information injection for the local branch and combining image and text tokens for the global branch, these are very straightforward.
2. Missing baseline: GLIGEN [Li et al., CVPR 2023], which also supports conditions studied in this paper.
3. One more weakness is the missing evaluation of condition correspondence. They only reported FID as a metric, which only reflects image quality. But they should also study how well the generated images correspond with the input conditions. For example, GLIGEN uses Mask R-CNN to detect keypoints from generated images and compares them with the input keypoints, so that we know how well the model follows the input. I understand that for certain conditions, such as edge maps, it may be hard to evaluate, but at least for keypoints, semantic maps, and depth maps, it is easy to come up with some metrics.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: NA
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See weakness
====================================================================================
They addressed my main concern, which is that they only evaluated image quality, but not controllability.
I strongly encourage them to add the results tables from the rebuttal to their paper, so that they can serve as a baseline for future controllable image generation work.
Based on this point, I am willing to raise my score, despite feeling that the novelty is a bit weak.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your valuable comments**
**Q1. The technique novelty.**
**Answer**: Thanks for your suggestions! Please refer to the common Q1.
**Q2: The comparison with the GLIGEN.**
**Answer**: Thanks for your suggestions! GLIGEN is an excellent paper that introduces a model conditioned on bounding boxes with caption groundings. It also explores alternative forms of grounding, such as edge maps and pose information. While our original paper does not consider GLIGEN as a baseline due to its primary focus on the bounding-box condition and its lack of support for composable control, comparing our method to GLIGEN can enhance the comprehensiveness of our study and show the performance differences.
To facilitate its incorporation into the final version, we directly utilize the samples presented in Figure 5 of the main paper. We showcase the qualitative comparison results in Figure 4 of the rebuttal PDF. From the results, we can easily observe some shortcomings in GLIGEN's output. For example, the detail of the deer's face is not very good, the depiction of the forest lacks realism, and the sky is missing in the case of segmentation map condition.
We further conduct the quantitative comparison in terms of FID and CLIP Score:
| FID | Canny | MLSD | HED | Sketch | Pose |Depth | Segmentation | Content |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| GLIGEN | 24.74 | / | 28.57 | / | **24.57** | 21.46 | 27.39 | 25.12 |
| Ours | **17.79** | **26.18** | **17.86** | **20.11** | 26.61 | **21.20** | **23.40** | **23.98** |
| CLIP score | Canny | MLSD | HED | Sketch | Pose |Depth | Segmentation | Content |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| GLIGEN | 0.2493 | / | 0.2403 | / | **0.2534** | 0.2526 | 0.2456 | 0.2401 |
| Ours | **0.2539** | **0.2485** | **0.2556** | **0.2542** | 0.2514 | **0.2561** | **0.2540** | **0.2402** |
To evaluate the perception quality, we further conduct a user study, following the settings in Section 3 - User Study in the supplementary material.
| | Generation Quality | Match with Text | Match with Condition |
|:---|:---:|:---:|:---:|
| GLIGEN | 30.3% (121) | 44.2% (168) | 29.5% (118) |
| Ours | **69.7% (279)** | **55.8% (212)** | **70.5% (282)** |
We can find that our method outperforms GLIGEN in nearly all evaluations.
**Q3: Evaluation of the controllability.**
**Answer**: Really great question! How to evaluate controllability is a common and important problem for controllable diffusion models. We believe a user study is the most accurate way to evaluate controllability from the user-perception view, and we already include this metric in our user study (provided in the supplementary material). However, following your suggestion, we also tried some automatic controllability evaluation metrics. Please refer to the common Q3.
---
Rebuttal 2:
Title: Help check the rebuttal and happy to discuss more.
Comment: Dear Reviewer WHfG,
We are very grateful for your efforts and suggestions! We have addressed the concerns in the above rebuttal. Could you please help take a look and see whether your concerns are well addressed? We are very happy to discuss with you and provide further clarification for any new questions. Grateful for your effort!
---
Rebuttal Comment 2.1:
Title: Thanks for the updated comments
Comment: Dear Reviewer WHfG,
Thanks for your prompt response and raising the score to accept. We will follow your suggestion and add all the results shown in the rebuttal into the final version. | Summary: This paper proposes Uni-ControlNet for the simultaneous utilization of various local controls and global controls within a single model in a flexible and composable manner. This is achieved by fine-tuning of two additional adapters on top of pre-trained text-to-image diffusion models, eliminating the significant cost of training from scratch.
Strengths: This paper propose a new framework that leverages lightweight adapters to enable precise controls in a single model.
Weaknesses: 1. The training sets for the different models in Table 2 are not the same. This raises the question of fairness in the comparisons between the models.
2. The author should compare with the simple baseline Multi-controlnet: https://huggingface.co/blog/controlnet.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It is unclear from the information provided whether the LAION dataset underwent any filtering, such as using OpenPose. Not all images include humans to detect.
2. The training time and GPU usage are not provided in the given information. These details are important for understanding the computational requirements and resource usage of the models.
3. Is the last condition in Figure 5 image condition(global condition)? It is unclear how ControlNet implements this condition. The paper should provide a clear explanation of the condition and how it is implemented in ControlNet to ensure transparency and understanding of the model.
4. During training, are all seven conditions of each image simultaneously inputted to the network for training, or are some conditions selectively set to empty?
5. The evaluation of clip scores is not discussed, which is important for text-driven generation.
6. Why is Feature Denormalization considered superior to using SPADE (Injection-S1) or ControlNet (Injection-S2) directly? Could you give some explanation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your valuable comments!**
**Q1: Fairness in comparisons.**
**Answer**: It's worth noting that the training sets used by the compared models are not publicly available (we already made requests by email, but either received no response or the data was not sharable). Additionally, the specific training settings for their released models are not fully disclosed either, which makes it hard to guarantee absolute fairness in the comparison. To minimize this gap, we conducted experiments to ablate the model designs between our method and the other models in Section 4.3 of the main paper. Through these controlled experiments, we were able to demonstrate the effectiveness of our method.
**Q2: Comparison with Multi-ControlNet.**
**Answer**: Thanks for your suggestions! Please refer to the common Q2.
**Q3: The construction of the training set.**
**Answer**: We did not employ any filtering method for the LAION dataset, as stated in line 186 in the main paper, and used a random subset of the dataset for training. As also mentioned in line 178-180 in the main paper, we adopted a dropout strategy for conditions during training.
Regarding the pose condition, it is true that not all images in the dataset include humans. To ensure that the pose condition is fully trained, we opted not to drop the pose condition during training. This means that we always keep the pose condition if it is available, allowing the model to learn the full range of pose information. In the future, if only a very small portion of data exists for some special conditions, advanced resampling strategies may be needed to guarantee balance and sufficient training.
**Q4: Training time and GPU usage.**
**Answer**: Good question! We train our model by using 64 NVIDIA Tesla 32G-V100 GPUs. We trained on 10 million data for 1 epoch, with a batch size of 192 for the local adapter and 256 for the global adapter. It took approximately 3 days to train the local adapter, and around 1.5 days to train the global adapter. However, as illustrated in Section 4.3 in our main paper, training on 1 million data for 1 epoch is sufficient to achieve great results.
**Q5: How does ControlNet implement the content condition?**
**Answer**: Yes, the last condition in Figure 5 of the main paper represents the image condition. ControlNet v1.1 achieves control through content, using an image-to-image method that differs from our embedding-to-image approach. To implement this, they first shuffle the content by remapping the image based on a random flow and then use the shuffled content to control the generation process.
**Q6: The keep and drop of conditions during training.**
**Answer**: As mentioned in line 149 in our main paper, we concatenate all the local conditions along the channel dimension to enable simultaneous use of different conditions. During training, as stated in line 178-180 in the main paper, we adopt a predefined probability to randomly drop each condition, along with an additional probability to deliberately keep or drop all conditions. For the dropped conditions, we set the value of the corresponding input channels to 0.
**Q7: Evaluation on CLIP score.**
**Answer**: Due to space constraints in the main paper, we have provided the evaluation results in terms of CLIP score in Table 1 and Table 2 of the supplementary material. In addition to the FID and CLIP score, we have also conducted a user study to evaluate the results, which can be found in Figure 3 and Figure 4 of the supplementary material. These evaluation measures provide a comprehensive assessment of the performance of our model.
**Q8: Why is FDN better than using SPADE or ControlNet directly?**
**Answer**: It is a great question! Directly using SPADE to inject conditions involves resizing the conditions to the corresponding resolutions using interpolation which we call Injection-S1 in our main paper. This direct interpolation significantly destroys condition information, leading to poor alignment with the conditions, as illustrated in Figure 7, Table 3 of the main paper. Additionally, when directly resizing the conditions and sending them to the model, the resized conditions cannot align well with the latent space where they were injected. In contrast, our FDN employs a multi-scale injection strategy that provides condition information at different levels, resulting in richer condition information. Furthermore, our feature extractor projects the conditions to the corresponding latent spaces of different layers, which allows for better alignment between the conditions and noise features.
When using ControlNet or T2I-Adapter directly, which only provide condition information in the input layer of the adapter (which we refer to as Injection-S2 in our main paper), the model may lose some information of the conditions in deeper layers. This is illustrated in Figure 7 of the main paper, where Injection-S2 cannot handle composite controls effectively, and the generated samples do not align well with the combined conditions, or the conditions are not well merged.
In contrast, our FDN method provides multi-scale injection, which allows for better preservation of condition information and alleviates the condition forgetting issue in deeper layers. We demonstrate that this design results in superior performance in handling composite controls within a unified framework.
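To make the denormalization mechanism discussed above concrete, here is a minimal NumPy sketch of SPADE-style feature denormalization at a single resolution: normalize the noise features, then modulate them with a scale and shift predicted from condition features at the same resolution. All names, shapes, and the stand-in linear projections are illustrative assumptions, not the paper's actual FDN implementation.

```python
import numpy as np

def fdn(h, cond_feat, gamma_w, beta_w, eps=1e-5):
    """SPADE-style feature denormalization sketch: normalize noise
    features h (C, H, W), then apply a spatially varying scale/shift
    predicted from condition features. gamma_w/beta_w stand in for
    learned 1x1 convolutions (hypothetical parameters)."""
    mu = h.mean(axis=(1, 2), keepdims=True)
    sigma = h.std(axis=(1, 2), keepdims=True) + eps
    h_norm = (h - mu) / sigma
    gamma = np.einsum('oc,chw->ohw', gamma_w, cond_feat)  # per-pixel scale
    beta = np.einsum('oc,chw->ohw', beta_w, cond_feat)    # per-pixel shift
    return h_norm * (1.0 + gamma) + beta

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 16, 16))   # noise features at one resolution
c = rng.normal(size=(4, 16, 16))   # condition features, same resolution
gw, bw = 0.1 * rng.normal(size=(8, 4)), 0.1 * rng.normal(size=(8, 4))
out = fdn(h, c, gw, bw)
print(out.shape)  # (8, 16, 16)
```

Applying a block like this at every resolution of the local adapter, with condition features produced by the feature extractor for each scale, is the multi-scale injection idea: the condition signal re-enters at deeper layers instead of only at the input.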
---
Rebuttal 2:
Title: Are there any more questions or concerns?
Comment: Dear Reviewer RHVi,
Really thank you for your efforts and suggestions! Could you please help check our response and see whether your questions are well answered? We are very pleased to engage in a discussion with you and provide additional clarification for any new questions.
---
Rebuttal Comment 2.1:
Comment: Thanks for the author's response. All my concerns have been well addressed. I would like to raise the score to weak accept.
---
Reply to Comment 2.1.1:
Title: Thanks for the recognition of our work
Comment: Dear Reviewer RHVi, we are glad that all your concerns have been addressed. We really appreciate your prompt response and recognition of our work. | Rebuttal 1:
Rebuttal: **We would like to thank all the reviewers for the valuable feedback!** Here we first address some common questions.
**Q1. Re-clarification of Contribution and Novelty.**
**Answer**: We would like to emphasize that our primary contribution is proposing a new unified controllable diffusion model that can not only handle different conditions within **one single model** but also supports composable control, as illustrated in Table 1 in the main paper. By contrast, existing methods fail to achieve this unified framework within one single model. Besides, even for those methods that support composite control, their composability is much worse than ours. Through extensive qualitative and quantitative evaluations, our method demonstrates even overall better results with a unified single model thanks to our newly proposed designs.
From the technique side, we have explored some existing techniques and found they are inadequate for achieving our goal of a unified framework. For example, the injection strategy employed by ControlNet and T2I-Adapter is insufficient, as it will suffer from information loss. Similarly, directly using SPADE will also result in poor performance, as resizing condition features through direct interpolation to low resolutions causes significant loss of condition information. Therefore, we develop a new multi-scale injection strategy through FDN to achieve better alignment between condition features and latent noise features across different layers of the local adapter, which is not explored in previous methods.
**Q2: Comparison with Multi-ControlNet.**
**Answer**: Great suggestion, and thanks for the reminder! We want to explain that we missed the comparison with Multi-ControlNet because it is not included in the original ControlNet paper. However, we acknowledge the importance of comparing our method with Multi-ControlNet. To facilitate its incorporation into the final version, we utilize the samples presented in Figure 6 of the main paper and present the comparison results in Figure 3 of the rebuttal PDF. We observe that Multi-ControlNet has much worse composability. For example, similar to T2I-Adapter, it misses the podium and the car in the first 2 samples in Figure 3 of the rebuttal PDF, respectively. In addition, the composite generation of a local condition and a global condition is also not very good.
As Multi-ControlNet is designed for composite control, we conducted a similar user study evaluation following the settings in Section 3 - User Study of the supplementary material. The results are presented below:
| | Generation Quality | Match with Text | Match with Condition |
|:---|:---:|:---:|:---:|
| Multi-ControlNet | 27.0% (108) | 39.5% (79) | 10.8% (43) |
| Ours | **73.0% (292)** | **60.5% (121)** | **89.2% (357)** |
It can be seen that our method demonstrates a clear advantage over Multi-ControlNet on all three metrics in the user study.
**Q3: Evaluation of the controllability.**
**Answer**: Really great question! Evaluating the controllability of different methods is a crucial aspect of the controllable diffusion models. We firmly believe that human perception provides the most effective and accurate measure in this regard, especially for the multi-condition scenarios. Therefore, we conducted a user study that encompassed both single condition and multi-condition controls, allowing participants to select the results they deemed to best match the given conditions. The results of our ablation study can be found in the supplementary material, specifically in Figure 3 and Figure 4.
While human perception serves as the most important evaluation metric, we also recognize the importance of utilizing other quantitative metrics as the auxiliary metrics to assess the controllability. Following the reviewers' suggestion, we employed the following metrics for single-condition generation:
1. SSIM (Structural Similarity) for Canny, HED, MLSD, and sketch conditions.
2. mAP (mean Average Precision) based on OKS (Object Keypoint Similarity) for pose condition.
3. MSE (Mean Squared Error) for Depth map.
4. mIoU (Mean Intersection over Union) for segmentation map.
5. CLIP score for content condition.
To calculate these metrics, we compare the extracted conditions from the natural image (the ground truth) and the corresponding generated image. We follow the settings in Section 4.2 Comparison with Existing Methods - Quantitative Comparison of the main paper and here are the results:
| | Canny-SSIM | MLSD-SSIM | HED-SSIM | Sketch-SSIM | Pose-mAP |Depth-MSE | Segmentation-mIoU | Content-CLIP score |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ControlNet | 0.4828 | **0.7455** | 0.4719 | 0.3657 | 0.4359 | **87.57** | **0.4431** | 0.6765 |
| GLIGEN | 0.4226 | / | 0.4015 | / | 0.1677 | 88.22 | 0.2557 | 0.7458 |
| T2I-Adapter | 0.4422 | / | / | 0.5148 | **0.5283** | 89.82 | 0.2406 | 0.7078 |
| Ours | **0.4911** | 0.6773 | **0.5197** | **0.5923** | 0.2164 | 91.05 | 0.3160 | **0.7753** |
Our method outperforms other baseline methods in 4 out of 8 evaluation metrics. Notably, ControlNet achieves the best performance in 3 out of 8 metrics, while T2I-Adapter only excels in 1 out of 8 metrics. However, it should be noted that such methods employ different models for different conditions, allowing each model to be well-trained for its corresponding condition. In contrast, we only use a single model and achieved even overall superior results.
By utilizing both human perception (user study) and quantitative metrics, we aim to provide a comprehensive evaluation of the controllability achieved by our method and enable a thorough understanding of its performance.
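As an example of how one of the auxiliary controllability metrics above can be computed, here is a minimal sketch of mean IoU between the segmentation map extracted from a generated image and the one extracted from the ground-truth image. This is an illustrative implementation of the standard metric, not the authors' evaluation code.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union between two integer segmentation
    maps; classes absent from both maps are skipped."""
    ious = []
    for cls in range(num_classes):
        inter = np.logical_and(pred == cls, gt == cls).sum()
        union = np.logical_or(pred == cls, gt == cls).sum()
        if union > 0:                       # skip classes in neither map
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny toy maps: gt from the real image, pred from the generated image
gt = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2]])
print(round(mean_iou(pred, gt, num_classes=3), 4))  # 0.7222
```

The SSIM, MSE, and keypoint-mAP entries in the table are computed in the same extract-then-compare fashion, each with its own condition extractor and similarity measure.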
Pdf: /pdf/12614ea1e9f3e6a859898cf698d7ff317c7aa7b1.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors proposed Uni-ControlNet, a novel approach that allows for the simultaneous utilization of different local controls and global controls. It uses two additional adapters (local and global) and injects their outputs into the frozen pretrained diffusion model, and only the parameters in the adapters need training.
Through both quantitative and qualitative comparisons, Uni-ControlNet demonstrates its superiority over existing methods in terms of controllability, generation quality, and composability.
Strengths: 1. By training with multiple conditions simultaneously, Uni-controlnet is able to perform various kinds of control with only one model.
2. Uni-controlnet only adds 2 adapters, which is efficient in both training and inference.
3. By concatenating clip image embedding with text embedding (condition), the method can control the style of generated image.
Weaknesses: 1. The clarification of the dataset construction is unclear. For example, are the skeletons/sketches generated by models, or manually collected? If they are automatically generated by models, the performance will be bounded by the accuracy of those models, and may suffer from distribution gaps if the control maps are painted by humans during inference. Otherwise, it would be very hard to annotate such a complex dataset.
2. Insufficient ablation study. As mentioned in L14, "Uni-ControlNet only necessitates a constant number (i.e., 2) of adapters, regardless of the number of local or global controls used." Is it because of the structure design? If so, a normal ControlNet with multiple controls trained together should be compared with.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What's the inference time cost of Uni-controlNet?
2. How can we extend a trained Uni-ControlNet to other types of control? Do we need to train the adapters with all types of control together from scratch?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: see weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your valuable comments!**
**Q1: How to get the training data of sketches?**
**Answer**: Great question! Indeed, annotating a sketch dataset can be challenging. In our experiment, we initially obtain the HED boundary detection of an image and subsequently utilize a sketch simplification method to generate the sketches for the training samples. Although there are distribution gaps between the hand-drawn sketches and the model-generated sketches, we have observed that our model can handle hand-drawn sketches pretty well (Figure 1 of the rebuttal PDF). Please note that how to further boost the generation quality by bridging the gap between hand-drawn and model-generated sketches is beyond the scope of this paper.
**Q2: Ablation study on structure design.**
**Answer**: In our ablation study, we extensively investigated the impact of structure design. One aspect we explored is, as you mentioned, training a ControlNet with multiple controls simultaneously, referred to as Injection-S2 in our main paper. However, this design yielded relatively poor results, as illustrated in Figure 7 and Table 3 in the main paper, as well as Table 2 in the supplementary material.
Furthermore, we conducted a comparison with another structure design called "Injection-S1", which directly utilizes SPADE. It can be seen that, Injection-S1 produced inferior results compared to both our proposed method and Injection-S2.
**Q3: Inference cost.**
**Answer**: We conducted a test by evaluating 100 samples for each condition and calculated the average inference time per sample:
| | ControlNet | T2I-Adapter | Ours |
|:---|:---:|:---:|:---:|
| Seconds | 9.02 | **6.21** | 9.16 |
The results indicate that the inference cost of ControlNet and our model is approximately the same, around 9 seconds per sample on average. On the other hand, the T2I-Adapter demonstrates a faster inference time, achieving about 6 seconds per sample on average. This can be attributed to the lightweight nature of the single adapter in the T2I-Adapter model.
**Q4: Extending a trained Uni-ControlNet to newly added Conditions.**
**Answer**: Super insightful question! To extend a trained Uni-ControlNet to support new conditions, we conducted a two-step experiment for comparison and analysis purposes. First, we train a local adapter specific to N conditions. Next, we introduce a new type of condition and extend the trained adapter to (N+1) conditions. The adaptation process involves modifying the number of input channels of the first convolutional layer in the Uni-ControlNet's feature extractor. We then retrain the local adapter with 4 different retraining strategies (R1-4) to accommodate the new condition:
1. Retraining the entire feature extractor (R1),
2. Only retraining the pre-feature extractor, which is the part that projects the condition from resolution 512 to 64 (R2),
3. Only retraining the first convolutional layer in the feature extractor (R3),
4. Without retraining, i.e., random initialization of the first convolutional layer in the feature extractor (R4).
During the retraining process, we keep the weights of the copied encoder in the local adapter fixed, and use a training dataset of 300k samples for retraining. We show the extension from [MLSD + HED + Sketch + OpenPose + Depth + Seg] to [MLSD + HED + Sketch + OpenPose + Depth + Seg + **Canny**]. The results of this extension process are presented in Figure 2 of the rebuttal PDF. We surprisingly observe that retraining solely the first convolutional layer in the feature extractor already adequately enables Uni-ControlNet to handle the newly added condition. This is a great feature that enables our model to quickly expand to new conditions!
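As a rough illustration of the adaptation step (a sketch of ours, not the authors' code; the shapes and the 3-channels-per-condition layout are assumptions for the example), extending the input channels of a first convolutional layer while preserving the trained weights could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def extend_first_conv(weight, extra_in_channels, init_scale=0.02):
    """Grow the input-channel dimension of a conv weight (out, in, kh, kw).

    Trained channels are kept as-is; only the newly appended channels are
    randomly initialized (they would then be retrained, as in strategy R3).
    """
    out_ch, in_ch, kh, kw = weight.shape
    new_part = rng.normal(0.0, init_scale, size=(out_ch, extra_in_channels, kh, kw))
    return np.concatenate([weight, new_part], axis=1)

# Hypothetical layout: N = 6 conditions, 3 channels each -> 18 input channels.
w_old = rng.normal(size=(64, 18, 3, 3))
# Adding one condition (e.g., Canny) -> 21 input channels.
w_new = extend_first_conv(w_old, extra_in_channels=3)
print(w_new.shape)                         # (64, 21, 3, 3)
print(np.allclose(w_new[:, :18], w_old))   # True: trained weights preserved
```

Only the appended slice (and, per strategy R3, the layer it belongs to) would then receive gradient updates during retraining.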
---
Rebuttal 2:
Title: Help check whether questions are well answered.
Comment: Dear Reviewer WFYK,
We would like to thank you again for your efforts and positive feedback! Could you please find time to take a look at our response and check whether your questions are well answered? We are very happy to answer any questions you might still have.
---
Rebuttal 3:
Comment: Thanks for the authors' rebuttal. I think my concerns have been addressed.
---
Rebuttal Comment 3.1:
Title: Thanks for your comments!
Comment: Dear Reviewer WFYK,
We are glad that your concerns have been addressed! And we sincerely thank you for your valuable feedback and recognition of our paper! | null | null | null | null | null | null |
Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block Decomposition | Accept (poster) | Summary: The authors show that, in Gaussian MTP2 distributions, bridges in the graph structure have a closed form solution. They use this observation to suggest practical solutions that can be applied whenever such models are being fit.
Strengths: A nice, more or less self-contained theoretical work that unifies and extends some existing lines of research into MTP2 Gaussian distributions.
Weaknesses: Some room for improvement in terms of presentation and some minor typos. It's harder for me to judge practical utility. While it is true that MTP2 distributions have some practical utility (I've used them myself), it isn't clear that many real datasets meet that requirement (even approximately).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - "calculating the thresholded graph, bridges, and clusters, is negligible" Sure, but I feel that a precise statement and citation is needed here. In particular, you still have to compute the sample covariance matrix for this.
- I actually had to read it a couple times to understand the exact procedure that was being proposed. I suggest adding some clarifying details to help the reader.
- What is the primary limitation that makes this result only apply to MTP2 distributions? Is it possible that something similar could be true more generally? I didn't really get a sense of why MTP2 was required in the main text.
- Do you need Gaussian distributions here? Can something be done for more general MTP2 distributions, e.g., those that can still be represented as pairwise graphical models?
Misc. typos (only a few listed):
- "which severs as a common assumption"
- "make possibilities of solving..." (revise)
- "As thresholded graph plays"
- "garnered considerable attentions"
- "Bridge is one of the important concepts"
- "closed-form solutions in literature heavily"
- "seems inapplicable for dense graph"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Some discussion of limitations, though it would have been nice to see a more clearly identified set of problems yet to be solved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Answer to Questions Part 1
>"calculating the thresholded graph, bridges, and clusters, is negligible" Sure, but I feel that a precise statement and citation is needed here. In particular, you still have to compute the sample covariance matrix for this.
__Reply:__ We are grateful for your valuable suggestion. We would like to elaborate on this matter and provide a more precise statement. For example, given the sample covariance matrix, the computational cost associated with calculating the thresholded graph, bridges, and clusters is relatively low [1,2].
For a detailed discussion on why the computational cost associated with determining the thresholded graph, bridges, and clusters is much lower compared to solving sub-problems, please refer to Part 3 of our global response.
__References__:
[1] Jens M Schmidt. A simple test on 2-vertex-and 2-edge-connectivity. Information Processing Letters, 113(7):241–244, 2013.
[2] R Endre Tarjan. A note on finding the bridges of a graph. Information Processing Letters, 2(6):160–161, 1974.
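To make the claimed low cost concrete, here is a minimal sketch (ours, for illustration only; the exact thresholding rule used in the paper may differ) of the thresholding and Tarjan-style bridge-finding steps:

```python
import numpy as np

def thresholded_graph(S, lam):
    """Adjacency of a thresholded graph: connect i-j when |S_ij| > lam."""
    A = np.abs(S) > lam
    np.fill_diagonal(A, False)
    return A

def find_bridges(A):
    """Tarjan-style O(V + E) bridge finding via DFS low-link values."""
    p = A.shape[0]
    disc = [-1] * p          # discovery times
    low = [0] * p            # low-link values
    bridges, timer = [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in map(int, np.flatnonzero(A[u])):
            if disc[v] == -1:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:       # no back edge above u: (u, v) is a bridge
                    bridges.append((u, v))
            elif v != parent:
                low[u] = min(low[u], disc[v])

    for s in range(p):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# Example: two triangles {0,1,2} and {3,4,5} joined by the single bridge (2, 3).
A = np.zeros((6, 6), dtype=bool)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = True
print(find_bridges(A))   # [(2, 3)]
```

Removing the detected bridges and taking connected components then yields the clusters (bridge-blocks), so the whole preprocessing stays linear in the number of edges.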
## Answer to Questions Part 2
> I actually had to read it a couple times to understand the exact procedure that was being proposed. I suggest adding some clarifying details to help the reader.
__Reply:__ We appreciate your suggestion that the procedure could benefit from additional clarification. Please find our detailed answer in Part 3 of the global response.
## Answer to Questions Part 3
>What is the primary limitation that makes this result only apply to MTP2 distributions? Is it possible that something similar could be true more generally? I didn't really get a sense of why MTP2 was required in the main text.
__Reply:__ We appreciate the reviewer for raising this interesting question. In part 1 of our global response, we highlight the importance of MTP2 in ensuring the validity of our proposed Theorem 3.3. This also explains why our results only apply to MTP2.
In response to whether our methodology can hold more broadly, in part 2 of our global response, we detail the circumstances under which our theoretical framework could be applied to the graphical lasso as well. Please check our detailed answer.
## Answer to Questions Part 4
> Do you need Gaussian distributions here? Can something be done for more general MTP2 distributions, e.g., those that can still be represented as pairwise graphical models?
__Reply:__ Thank you for your insightful comment. While our paper primarily focuses on learning $\mathrm{MTP}_2$ Gaussian graphical models, our method is not explicitly tied to Gaussian distributions. This is due to the deterministic nature of our optimization problem, as presented below:
$ \min_{\boldsymbol{\Theta}\succ \mathbf 0} - \log \det ( \boldsymbol{\Theta}) + \langle \boldsymbol{\Theta}, \mathbf{S} \rangle + \sum_{ij} \Lambda_{ij} | \Theta_{ij}|.$
Extending our method to non-Gaussian cases may be straightforward. For instance, in the context of elliptical distributions, we could substitute the sample correlation matrix with the Kendall's tau correlation matrix, leading to positive partial correlation graphs [1]. It's worth highlighting that the MTP2 property is somewhat restrictive, and as demonstrated in [2], an elliptical distribution that is MTP2 across all dimensions is essentially Gaussian. Lastly, we would like to clarify that the sole assumption in our paper is that $S_{ij} < \sqrt{S_{ii} S_{jj}}$ for any $i \neq j$.
We will include a more detailed discussion on this aspect in our revised manuscript.
__References__:
[1] R. Agrawal, U. Roy, and C. Uhler. "Covariance matrix estimation under total positivity for portfolio selection." Journal of Financial Econometrics, 20(2):367-389, 2022.
[2] D. Rossell, and P. Zwiernik. "Dependence in elliptical partial correlation graphs." Electronic Journal of Statistics, 15(2):4236-4263, 2021.
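The Kendall's tau substitution mentioned above can be sketched as follows (our illustration, not the authors' code; function names are ours). It uses the elliptical-family identity $\rho = \sin(\pi\tau/2)$ to build a robust stand-in for the sample correlation matrix:

```python
import numpy as np

def kendall_tau(x, y):
    """O(n^2) pairwise-count Kendall's tau (no tie correction)."""
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    iu = np.triu_indices(len(x), 1)
    return (dx * dy)[iu].mean()

def kendall_correlation_matrix(Y):
    """Robust substitute for the sample correlation matrix: under an
    elliptical distribution, rho_ij = sin(pi * tau_ij / 2)."""
    n, p = Y.shape
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            r = np.sin(np.pi * kendall_tau(Y[:, i], Y[:, j]) / 2)
            R[i, j] = R[j, i] = r
    return R

# Kendall's tau is rank-based, so monotone transforms leave R unchanged:
x = np.arange(10.0)
Y = np.column_stack([x, x**3, -x])
R = kendall_correlation_matrix(Y)
print(np.round(R, 2))   # perfectly monotone pairs give entries of +/-1
```

The resulting matrix would simply replace $\mathbf{S}$ in the optimization objective above.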
## Reply to Comments
> While it is true that MTP2 distributions have some practical utility (I've used them myself), it isn't clear that many real datasets meet that requirement (even approximately)
__Reply:__ Thank you for your thoughtful questions. From a practical perspective, MTP2 becomes a natural choice when the __variables are anticipated to display positive dependence__. Numerous real-world scenarios, particularly in sectors such as finance and social sciences, manifest this positive dependence.
To illuminate how data expressing positive correlation may approximate MTP2 properties, we have conducted supplementary experiments with the CROP Image Data set, which is used in our real-world experiment. The details are described in Part 4 of our global response. These experiments reveal that the CROP data set shows a form of positive correlation, thereby making it suitable for modeling with MTP2 graphical models. Please refer to our global response for our detailed replies.
## Additional Comments
__Reply__: Thank you for pointing out the typos. We will correct them in the revised version. | Summary: This paper studies a graphical lasso problem where the precision matrix is restricted to be symmetric M-matrix and the associated GMRF graph has a special structure. Specifically, the authors consider the situation when the graph allows bridge-block decomposition so that vertices can be partitioned into k parts by cutting k-1 "bridges". Under this situation, the authors show that the original problem can be decomposed into k subproblems, illustrated in theorems in section 3.2. Since the existing algorithm's complexity quickly increases as the dimensionality increases, decomposition into subproblems offers large computational benefits. Authors compare the performance of four different algorithms, using synthetic and real data, with and without leveraging the bridge-block decomposition and show that bridge-block decomposition provides huge computational benefits if the given graph has many edges that are “bridges”.
Strengths: The authors provide a closed form for the elements of the precision matrix corresponding to bridges, and make a connection with the existing literature for the case where the underlying GMRF graph is acyclic. This method is useful when one wants to solve the MTP2-constrained graphical lasso problem with a penalty parameter that allows bridge-block decomposition. This work resembles the existing literature on graphical lasso (without the MTP2 constraint), such as Witten et al. (2011) and Mazumder and Hastie (2012), where the graphical lasso problem can be decomposed into smaller subproblems when the graph has many connected components, and adds a contribution in the context of MTP2-constrained graphical lasso when the graph has many bridges.
Witten, D. M., Friedman, J. H., & Simon, N. (2011). New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4), 892-900.
Mazumder, R., & Hastie, T. (2012). Exact covariance thresholding into connected components for large-scale graphical lasso. The Journal of Machine Learning Research, 13(1), 781-794.
Weaknesses: The simulation settings and real data examples deviate somewhat from the usual (graphical) lasso settings and their motivation. The main goal of graphical lasso is to discover the underlying conditional independence structure from multivariate Gaussian data across various choices of the penalty parameter (aka the lasso path diagram). However, the authors first fix the graph in their settings, such as a preferential attachment graph or a stochastic block model, and choose the penalty parameter according to the graph. This is understandable in the simulation study, to show the computational benefits of bridge-block decomposition, but fixing the graph in the real data experiment is not convincing, and it is not clearly stated why the MTP2 graphical lasso is appropriate for this crop image data problem (is it reasonable to assume that the crop image data come from a high-dimensional multivariate Gaussian distribution? What is the interpretation of the resulting graph? Why is the MTP2 constraint necessary/plausible in this problem?)
In practice, graphical lasso is run under various settings of the penalty parameter, chosen appropriately, e.g., via cross validation. The existing graphical lasso decompositions (Witten et al. (2011); Mazumder and Hastie (2012)) are useful since graphical lasso with a high penalty parameter often leads to a graph with many connected components. In a similar spirit, I suggest the authors illustrate how often a graph that allows bridge-block decomposition appears as the penalty parameter varies.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Does the main result, the decomposition into smaller graphical lasso subproblems under bridge-block decomposition, hold without the MTP2 constraint?
- If not, please clarify the role of the MTP2 constraint in the main results; is the MTP2 constraint necessary for Assumption 2.1 to hold?
- What happens if the graph has many connected components (without bridges) in the MTP2-constrained graphical lasso problem? Does a similar decomposition hold?
- Section 4.1: The sample size $n$ is 10 times the dimension $p$? This setting completely violates the main motivation of graphical lasso, where the $\ell_1$ penalty has been proposed because the MLE does not exist when $n<p$.
- Section 4.3: Shouldn't the data $y_i$ be 24000-dimensional, with $i=1,\dots,46$ observations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The main results are meaningful but simulation results and real data settings are less convincing. No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Answer to Questions Part 1
>Does the main result hold without the MTP2 constraint?
__Reply:__ We thank the reviewer for raising this point. The MTP2 constraints are essential prerequisites for our main results.
## Answer to Questions Part 2
> If not, please clarify the role of the MTP2 constraint in the main results.
__Reply:__ We appreciate the reviewer for raising this insightful question. Please refer to part 1 of global response for our detailed answer.
> Is the MTP2 constraint necessary for Assumption 2.1 to hold?
__Reply:__ The MTP2 constraint is not a prerequisite for Assumption 2.1. Assumption 2.1 primarily ensures the existence and uniqueness of the optimal solution.
## Answer to Questions Part 3
> What happens if the graph has many connected components (without bridges) in the MTP2-constrained graphical lasso problem? Does a similar decomposition hold?
__Reply:__ We appreciate your thought-provoking question. Yes, our theorem can also find the exact solution, just like the existing graphical lasso decomposition. Here is a simple comparison:
|Graph Type|Description|
|:-|:-|
|__Single-component__ graph with bridges|Our method works, the __existing method doesn't__.|
|__Multi-component__ graph with bridges|Our method __works better__ than existing method.|
|__Multi-component__ graph without bridges|Our method has __equivalent__ effectiveness to the existing method.|
In theory, bridge-block decomposition refers to the components after removing all bridges. So, even in a graph with many components and no bridges, we can still find clusters. This makes bridge-block decomposition versatile, as it can deal with __both connected and disconnected__ sparse thresholded graphs.
## Answer to Questions Part 4
> Section 4.1: Sample size n is 10 times the dimension p? This setting completely violates the main motivation of graphical lasso.
__Reply:__ Thank you for your thoughtful feedback. We understand the graphical lasso is often used when $p>n$. However, __the aim of our experiments__ is not merely to uphold the statistical rationale of the MTP2 graphical lasso, but to __demonstrate how our proposed theorem can significantly accelerate the learning of high-dimensional sparse MTP2 graphs from an optimization perspective__. Hence, our experimental design is crafted to ensure successful graph structure recovery across all experiments.
In response to your suggestions, we've added new experiments (see Figure 4 in the attached PDF). Here, we set the sample size to $n=0.1p$, where $p$ is the dimension. These experiments show that our method still effectively speeds up convergence due to the sparsity of the underlying structure.
## Answer to Questions Part 5
> Section 4.3: Shouldn't the data $y_i$ be 24000-dimensional, with $i=1,\dots,46$ observations?
__Reply:__ In the context of graphical models, every node corresponds to a variable, while the edges represent the conditional dependencies between the variables. As a result, our network comprises 24000 nodes. It is important to clarify that in this scenario, $y_i$ does not represent a feature, but rather a signal consisting of 46 observations. This yields a configuration where $p=24000$ and $n=46$.
## Reply to Comments Part 1
> Fixing the graph is not convincing, and it is not clear why the MTP2 graphical lasso is appropriate for this crop image data problem (what is the interpretation of the resulting graph? why is the MTP2 constraint necessary/plausible in this problem?)
__Reply:__ We appreciate your practical question. To alleviate your concerns, we have included more experiments in Part 4 of the global response to show that the MTP2 graphical Lasso is suitable for graph-based clustering with the CROP dataset.
The results show that the CROP data exhibit positive dependency, well suited to the MTP2 assumption. In the estimated graph, edges represent positive conditional dependence between variables. In a clustering context, this often suggests that interconnected nodes in the estimated graph tend to fall within the same cluster.
Given our observation that the data adhere to MTP2 properties, it is advantageous to include MTP2 constraints __as prior information to improve the learning of graphical models__. The MTP2 structure also possesses excellent mathematical properties that confer __significant computational advantages__, enabling bridge-block decomposition to be applied to high-dimensional sparse graphs that cannot be solved via existing methods.
## Reply to Comments Part 2
> The existing graphical lasso decompositions are useful since graphical lasso with a high penalty parameter often leads to a graph with many connected components. I suggest the authors illustrate how often a graph that allows bridge-block decomposition appears as the penalty parameter varies.
__Reply:__ Thank you for your useful comments on practical aspects. To respond, we've done more tests using the same steps as in our synthetic data experiments. We created a random SBM model with 2000 nodes. By adjusting $\lambda$, we checked how the decomposition methods perform under different thresholded graphs. The results are shown in the table below:
|$\lambda$|0|0.01|0.03|0.1|0.15|0.18|0.3|0.5|1|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Number of clusters by __existing decomposition__|1|1|1|1|1|__15__|__1681__|__1974__|__2000__|
|Number of clusters by __bridge-block decomposition__| 1|1|__9__|__402__|__406__|__411__|__1793__|__1978__|__2000__|
We found that:
1. Our method becomes effective when $\lambda\geq 0.03$, while the existing decomposition method is applicable when $\lambda\geq 0.18$.
2. Theoretically, our approach can handle both scenarios: when the graph has bridges or when it has multiple components. Conversely, the existing method can only cope with the latter case.
The results clearly illustrate that __our method exhibits a broader range of applicability__ compared to existing decomposition methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I have some additional questions.
> Reply: In the context of graphical models, every node corresponds to a data, while the edges represent the conditional dependencies between the data. As a result, our network comprises 24000 nodes. It is important to clarify that in this scenario, yi does not represent a feature, but rather a signal consisting of 46 observations. This yields a configuration where p=24000 and n=46.
At the bottom of page 8 (why are there no line numbers?), the current manuscript reads "Our goal is to perform graph-based clustering for the indexed data ${y_1, . . . , y_{24000}}$ using MTP2 GGMs, where $y_i \in \mathbb{R}^{46}$." This sentence implies $p=46$ and $n=24000$, and this is what I asked. Please cross-check with the notations above eq. (1) on page 2.
> (Section 2.2) This paper advances prior research in two ways. Firstly, we extend the closed-form solutions beyond the acyclic graph structure to encompass any edge corresponding to a bridge.
> (Global response) Our research offers two main theoretical advancements. The first proposes an explicit form for the inverse of $\Theta$.
I feel these statements are too strong. As described in Figure 2, when the graph allows bridge-block decomposition, what the authors show is the decomposition of the big problem into smaller subproblems, together with closed-form solutions for the off-block-diagonal entries (orange entries of Fig. 2). The authors claim that this paper "extend[s] the closed-form solutions beyond the acyclic graph structure" of Fattahi and Sojoudi (2019, JMLR, ref. 27). I think this may confuse readers, since what the authors show is mainly the decomposition, not an explicit solution similar to equation (11) of Fattahi and Sojoudi (2019). I believe the authors should use terms like "closed form" or "explicit form" with extra caution in this regard, and I strongly suggest revising the sentence "we extend the closed-form solutions beyond the acyclic graph structure to encompass any edge corresponding to a bridge" and other sentences with similar context.
I appreciate the authors running additional simulations when $p>>n$. The authors use preferential attachment (Barabási–Albert) graphs and stochastic block models in the simulation studies to illustrate bridge-block decomposition, which often have many bridges by construction. Those graphs arise from a completely different context (e.g., community detection problems), and I understand this is for illustrative purposes, but are there any references using preferential attachment graphs or SBMs in graphical lasso settings like in this paper?
> (Section 2.1) This paper considers estimating the precision matrix $\Theta$ given $n$ **independent and identically distributed**
observations ${y_1, . . . , y_n}$ that follow an MTP2 Gaussian distribution"
> (section 4.3) Though it is not the focus of this paper to discover insights into the estimated MTP2 GGMs for better understanding the inherent nature of the data.
I thank the authors for running graphical lasso and presenting additional real data analysis results (Fig. 5 and Fig. 6). However, I am still not sure the crop image dataset is best suited for the proposed method. Each node corresponds to a pixel of a satellite image, and there are $n=46$ images $y_1,\dots,y_{46}$, which are time-varying measurements illustrating the temporal evolution of the observed area. This implies the data are temporally correlated, not iid. How long is the time interval (monthly, yearly)? What is being measured? (Pixel greyscale values? Please clarify and describe at least minimally, not just refer to ref. 47.) Most importantly, how should we interpret the conditional dependency structure in this real data analysis result?
If the estimated MTP2 GGM does not give any further understanding of the crop image data, I believe this dataset should not be used in the real data analysis.
I have increased my score from 3 to 4 based on the additional results, but still lean to a rejection due to above concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments and for raising the score of our paper. We're more than happy to further discuss and address your concerns.
## Reply to Post-Rebuttal Questions Part 1
> The current manuscript reads "Our goal is to perform graph-based clustering for the indexed data $y_1,\dots,y_{24000}$ using MTP2 GGMs, where $ y_i\in\mathbb R^{46}$." This sentence implies $p=46$ and $n=24000$.
__Reply__: Thank you for pointing out the ambiguity in our manuscript. We apologize for any confusion. To clarify, we propose to revise the sentence as follows:
"Our aim is to apply graph-based clustering to the time series data that contains 46 observations, denoted as $\mathbf y_1,\dots, \mathbf y_{46}$, where $\mathbf y_i\in \mathbb R^{24000}$."
This representation aligns with the conventions typically adopted in the realm of graphical models. We hope this modification adequately addresses your concern. We are committed to improving our manuscript based on your feedback.
## Reply to Post-Rebuttal Questions Part 2
> Authors claim that this paper "extend the closed-form solutions beyond the acyclic graph structure" of Fattahi and Sojoudi (2019, JMLR, ref. 27). I think this may confuse readers, since what authors show is mainly the decomposition, not the explicit solution that is similar to equation (11) of Fattahi and Sojoudi (2019).
__Reply__: We deeply appreciate your constructive suggestions. We agree with the reviewer's comment that more caution is needed when using the terms "closed-form" and "explicit solution" to avoid mixing up the results from our paper and the paper of Fattahi and Sojoudi.
To address your concern, we propose to revise the sentence in our manuscript as follows: "While previous studies have offered an explicit solution for $\Theta_{ij}$ in the case of acyclic thresholded graphs, we reveal that the explicit solution for $\Theta_{ij}$ consistently applies to every $(i,j)$ pair acting as a bridge in non-acyclic graphs."
To provide further clarification, it's noteworthy that the JMLR paper outlines an explicit computation method for deriving each entry of the solution associated with the acyclic graph structure. However, our paper illustrates that the solution can be obtained using a decomposed approach based on bridge-block decomposition. In this approach, the entries corresponding to bridges have an explicit computation method, similar to the results presented in the JMLR paper, and the decomposed subproblems can be solved independently. Importantly, our results are more general: the JMLR paper necessitates an acyclic graph structure, which is a specific case in our study. This follows from the fact that all edges in an acyclic graph are bridges, which have explicit solutions as demonstrated in our theory.
Your feedback is highly valuable in helping us improve the clarity and precision of our manuscript. We would make revisions accordingly and ensure a detailed comparison of our results with those of Fattahi and Sojoudi's work in our revised manuscript.
> Are there any references that using those preferential attachement graph or SBM in the graphical lasso settings like in this paper?
__Reply__: Yes, both the Barabási–Albert (BA) and Stochastic Block Models (SBM) are frequently used in the context of graphical models, including graphical lasso settings. Below, we provide several relevant references:
__BA graph:__
1. Liu, H., & Wang, L. (2017). TIGER: A Tuning-Insensitive Approach for Optimally Estimating Gaussian Graphical Models.
2. Ying, J., Cardoso, J. V. de M., & Palomar, D. (2020). Nonconvex Sparse Graph Learning under Laplacian Constrained Graphical Model. Advances in Neural Information Processing Systems, 33, 7101-7113.
3. Li, R., et al. (2023). Graph Learning for Latent-Variable Gaussian Graphical Models under Laplacian Constraints. Neurocomputing, 532, 67-76.
__SBM graph:__
1. Mohan, K., et al. (2014). Node-based learning of multiple Gaussian graphical models. The Journal of Machine Learning Research, 15(1), 445-488.
2. Pircalabelu, E., & Claeskens, G. (2020). Community-Based Group Graphical Lasso. The Journal of Machine Learning Research, 21(1), 2406-2437.
3. Ying, J., Cardoso, J. V. de M., & Palomar, D. P. (2023). Adaptive Estimation of Graphical Models under Total Positivity. International Conference on Machine Learning.
BA models play an important role in network science: they generate random scale-free networks using a preferential attachment mechanism, in which new nodes tend to link to nodes with higher degrees as the network evolves. Scale-free networks are well suited to model the Internet, the world wide web, protein interaction networks, citation networks, and most social and online networks. Stochastic block models serve as fundamental tools in network science, creating random networks based on community structures, where nodes within the same group are more likely to form connections.
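For reference, the preferential attachment mechanism described above can be sketched in a few lines (our minimal illustration, not any of the cited implementations):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Each new node attaches to m distinct existing nodes, chosen with
    probability proportional to their current degree."""
    rng = random.Random(seed)
    targets = list(range(m))   # nodes the next arrival will attach to
    repeated = []              # node multiset weighted by degree
    edges = []
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        repeated.extend(targets)        # each target just gained one degree
        repeated.extend([new] * m)      # new node arrives with degree m
        targets = []
        while len(targets) < m:         # degree-proportional sampling
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

edges = barabasi_albert(2000, 2)
print(len(edges))   # (n - m) * m = 3996 edges
```

Because every arrival contributes exactly m edges, early nodes accumulate degree and the resulting graph contains many low-degree leaves, whose incident edges are bridges.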
---
Reply to Comment 1.1.2:
Comment: ## Reply to Post-Rebuttal Questions Part 3
> Each node corresponds to each pixel of satellite image, and there are $n=46$ images $y_1,\dots,y_{46}$ which are time-varying measurements, illustrating the temporal evolution of the observed area. This implies data are temporally correlated, not iid.
__Reply__: Thank you for your insightful comment. We agree with your observation that the nodes, representing each pixel of the satellite image, are indeed temporally correlated due to the time-varying nature of the measurements. This introduces a temporal correlation in the data, which inevitably deviates from the assumption of independent and identically distributed (iid) data.
Despite the inherent temporal correlation in time series data, it is common practice in the realm of Gaussian graphical models to approximate the data as independent and identically distributed (i.i.d.). This approximation, while not strictly accurate, has been successfully employed in various studies, such as those by Liu et al. (2012), Wang et al. (2020), and Agrawal et al. (2022). These works utilize Gaussian graphical models for financial time series analysis, and although stock prices do not perfectly conform to the i.i.d. assumption, they are effectively modeled and interpreted within this framework. This demonstrates the practical utility of such models, suggesting that our approach can still provide valuable insights and meaningful results even in the presence of temporal correlation.
It is worth mentioning that estimating time-varying graphical models is indeed an intriguing research direction in the context of time series. Such an approach becomes particularly relevant when there's a need to understand the evolving interactive relationships among a set of random variables. While this specific aspect is not the primary focus of our current paper, we acknowledge its importance and potential for future research.
We greatly appreciate your insightful feedback. We plan to incorporate a more detailed discussion on this matter in our revised manuscript. Should you have any further queries or comments, please do not hesitate to reach out to us.
__Reference__:
-- Liu, H., Han, F., & Zhang, C. (2012). Transelliptical graphical models. Advances in Neural Information Processing Systems, 25.
-- Wang, Y., Roy, U., & Uhler, C. (2020). Learning high-dimensional Gaussian graphical models under total positivity without adjustment of tuning parameters. International Conference on Artificial Intelligence and Statistics. PMLR.
-- Agrawal, R., Roy, U., & Uhler, C. (2022). Covariance matrix estimation under total positivity for portfolio selection. Journal of Financial Econometrics, 20(2), 367-389.
> How big is the time interval (monthly, yearly)? What is being measured? (pixel greyscale value? please clarify and describe at least minimally, not just referring to ref. 47)
__Reply__: Images in the dataset are captured at five-day intervals. The recorded spectral information from these images represents variations in "colours" (referring to different spectral bands) for each pixel over the study period. Hence, each data point in this dataset corresponds to a time series of the spectral changes observed at a specific geographical location over time.
> Most importantly, how should we interpret the conditional dependency structure of this real data analysis result? If estimated MTP2 GMM does not give any further understanding of the crop image data, I believe this data should not be used in the real data analysis.
__Reply__: Thank you for your insightful question.
The estimated graph is statistically meaningful. We observed that the bulk of edges are located within the same type of crop, while the edges between nodes associated with different crops are relatively sparse. This finding is beneficial for clustering processes and aligns with our anticipations, as a stronger positive dependency is often exhibited within the same class, while the dependency among different classes tends to be significantly weaker.
In addition to revealing clustering patterns, our graph can reflect more intricate insights. For example, in Figure 6a of our manuscript, we noticed a substantial density of edges between two crop types, 'temporary meadow' and 'pasture' (colored in gold '#B79F00' and cyan '#00BFC4', respectively), indicating a conditional dependency significantly stronger than those found between other categories. This observation aligns with our expectations.
Therefore, the conditional dependency structure that we have inferred possesses the ability to represent the inherent interrelationships among different crops. We believe this provides valuable insight to further understanding of the crop image data. We appreciate your consideration and look forward to any further questions or comments you may have. | Summary: The paper studies the problem of estimating the precision matrix, which is the inverse of the correlation matrix, of a given Gaussian random vector $y$. The precision matrix $\Theta$ is assumed to satisfy a technical condition called MTP2 which states that $\Theta$ is symmetric and $\Theta_{i,j} \le 0$. This seems to be a well motivated assumption from various applications. The contribution of this paper is a technique for estimating $\Theta$ as follows: given a predicted sparsity pattern on $\Theta$ in the form of a graph $G$ , the natural optimization problem for estimating $\Theta$ can be solved by first solving the optimization problem on smaller 'blocks' and combining them across 'bridges'. They are defined as follows. Bridges are single edge cuts in $G$ and the resulting connected vertices are called blocks.
Strengths: The paper shows that given a block bridge decomposition, the optimization problem of estimating $\Theta$ can be efficiently solved by first solving the problem on the individual blocks and then combining the solution across bridges. The paper gives an explicit formula for doing so. In the case where $G$ is sparse, this can represent significant computational savings over estimating the entire precision matrix at once. Furthermore, since the work provides a structural theorem, any optimization algorithm can be used in conjunction with their observation.
Weaknesses: I am not familiar with the literature but it seems like a big assumption to know the threshold graph explicitly. What happens if this graph is unknown? It seems to be more natural that the graph is unknown and one must estimate it.
An intermediate setting which also seems interesting is in the case where we know a noisy approximation to the block bridge structure. How do the proposed methods perform under such noisy information? Are the derived formulas robust?
What is the motivation for even assuming the block bridge structure? I can see social networks being one motivation but it would be more convincing if there were experiments on (real) social networks.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What happens to the SBM experiments if the edge probabilities between the different blocks are increased? The quality of the method should deteriorate as the probabilities increase, since the block structure increasingly deteriorates. It would be interesting to see how much the proposed method can tolerate.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No ethical concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reply to Comments Part 1
> I am not familiar with the literature but it seems like a big assumption to know the threshold graph explicitly. What happens if this graph is unknown? It seems to be more natural that the graph is unknown and one must estimate it.
__Reply:__ We appreciate your feedback and would like to provide further clarification. Our problem is defined such that we already have access to the sample covariance matrix, denoted by $\mathbf{S}$, and the regularization matrix, represented as $\boldsymbol{\Lambda}$.
__Given these matrices__, we can __precisely__ compute the thresholded matrix, $\mathbf{T}$, using the formula $\mathbf{T}=\max(\mathbf{0},\mathbf{S}-\boldsymbol{\Lambda})$. Hence, the knowledge of the thresholded graph isn't an assumption we're imposing, but rather a natural outcome resulting from the specific optimization problem at hand.
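For concreteness, this thresholding step can be sketched as follows (a toy $3\times 3$ sample covariance with a uniform penalty; the function and variable names are ours for illustration):

```python
import numpy as np

def thresholded_graph(S, Lam):
    """Elementwise threshold T = max(0, S - Lam); an edge (i, j), i < j,
    belongs to the thresholded graph whenever T_ij > 0."""
    T = np.maximum(0.0, S - Lam)
    p = S.shape[0]
    edges = [(i, j) for i in range(p) for j in range(i + 1, p) if T[i, j] > 0]
    return T, edges

# toy 3x3 sample covariance with a uniform penalty of 0.3
S = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.7],
              [0.1, 0.7, 1.0]])
Lam = np.full((3, 3), 0.3)
T, edges = thresholded_graph(S, Lam)  # edge (0, 2) is thresholded away
```

Because $\mathbf{S}$ and $\boldsymbol{\Lambda}$ are given, this computation is deterministic, which is the point of the reply above: the thresholded graph is an outcome of the problem data, not an extra assumption.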
## Reply to Comments Part 2
> An intermediate setting which also seems interesting is in the case where we know a noisy approximation to the block bridge structure. How do the proposed methods perform under such noisy information? Are the derived formulas robust?
__Reply:__ Thank you for posing this question. In practical applications, samples inevitably carry noise, making it challenging to obtain an accurate estimate of the sample covariance matrix.
However, regardless of the sample size or the degree of sample noise, we can still apply our proposed method to the noisy sample covariance matrix. This is because the step of computing the bridge-block decomposition from the sample covariance matrix is deterministic, and __we can always obtain an optimal and exact solution__ via our proposed framework. As a result, from an optimization perspective, our method is robust.
## Reply to Comments Part 3
> What is the motivation for even assuming the block bridge structure? I can see social networks being one motivation but it would be more convincing if there were experiments on (real) social networks.
__Reply:__ Thank you for your insightful question. Our motivation for assuming the bridge-block structure stems from our focus on sparse graph learning. In sparse graphs, bridges are a common feature, and here's why.
One motivation for learning sparse graphs is to enhance interpretability by preserving only the most significant relationships among variables. Each node connects to its most important neighbors, reducing the probability of cycle formation. Furthermore, the connectivity of sparse graphs is rather weak due to the limited number of edges, making them easy to separate. As a result, numerous edges in sparse graphs become bridges, i.e., edges whose removal would create additional connected components.
In the context of social networks, the stochastic block model is a common tool for representing such networks. Notably, social networks often exhibit strong intra-group connections and weaker inter-group ties. This structure may potentially harbor a considerable number of bridges.
While we did not have the opportunity to apply our methods to social network learning due to time constraints, we believe they could contribute to more efficient learning of such networks.
## Answer to Questions
> What happens to the SBM experiments if the edge probabilities between the different blocks are increased? The quality of the method should deteriorate as the probabilities increase, since the block structure increasingly deteriorates. It would be interesting to see how much the proposed method can tolerate.
__Reply:__ We appreciate your highlighting of this practical issue. We concur that the effectiveness of the method might decline as the graph grows denser.
In response to your queries, we conducted additional experiments to examine the extent to which we can speed up the convergence of the BCD method for various values of $p'$. Here, $p'$ represents the probability of forming an edge $(i,j)$, where $i$ and $j$ are any two distinct nodes in neighboring communities.
We utilized an SBM graph with $1500$ nodes for this study. The results are illustrated in Figure 2 of the attached PDF and are also summarized in the table below:
|Edge formation probability|0.05|0.10|0.15|0.20|0.25|0.30|0.35|0.40|0.45|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Ratio of Improvement |$179.7$|$150.1$|$80.1$|$60.1$| $31.1$|$16.6$|$3.6$|$3.5$|$3.2$|
Here, Ratio of Improvement refers to how many times we can accelerate the convergence of BCD methods. As expected, a rise in $p'$ increases the chance of multiple edges linking the blocks, which can potentially hamper the efficiency of our approach.
As we address in Section C of our Appendix, while our method is primarily intended for sparsely connected graphs, it retains its usefulness even for dense graphs. Our method can serve as an approximate solution or provide a warm start for other numerical algorithms, thereby boosting their computational efficiency.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for responding to the review. It seems that knowing $\Lambda$ is still an assumption. For now I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We appreciate your feedback.
In practice, the regularization matrix $\boldsymbol{\Lambda}$ can be effectively computed based on certain initial estimates. In our paper, we suggest the use of Nie et al.'s (2016) method for efficiently deriving these initial estimates.
__Reference__:
— Feiping Nie, Xiaoqian Wang, Michael Jordan, and Heng Huang. The constrained Laplacian rank algorithm for graph-based clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. | Summary: This paper studies the problem of learning Gaussian Graphical Models (GGMs) satisfying a certain positive associativity condition among the variables, namely that the precision matrix has nonnegative off-diagonal elements. This condition is known as being "multivariate totally positive of order two", or MTP$_2$, and has applications in ML (where it corresponds to attractive Markov random fields), finance, and more.
MTP$_2$ GGMs come with the benefit that the traditional optimization procedure used to estimate the precision matrix of a GGM, namely the graphical Lasso, takes on a particularly simple and smooth form. Prior work had shown various polynomial-time convergence guarantees for the graphical Lasso under the MTP$_2$ assumption, but these are still not very suitable for practical applications (scaling with the dimension $p$ as $O(p^3)$ or $O(p^4)$). Other prior work had shown a closed-form solution for the graphical Lasso under the assumption that the "thresholded sample covariance graph" (an object defined in terms of the sample covariance and the regularization parameters used in the graphical Lasso), or thresholded graph for short, is acyclic.
The main contribution of this paper is essentially to generalize the latter closed-form result in terms of the "bridge-block decomposition" of the thresholded graph. The bridge-block decomposition of a graph is essentially a partition of the graph into components connected only by "bridge" edges. Formally, a bridge edge is an edge such that deleting it increases the number of connected components in a graph; it is effectively "the only edge" bridging two different components (see Fig 1). The paper's main theorem (Thm 3.3) essentially says the following: to solve the graphical Lasso for an MTP$_2$ GGM, compute the bridge-block decomposition of the thresholded graph, run the graphical Lasso for each component separately, and stitch them together using a simple closed-form formula. Moreover, this theorem also readily recovers as a special case the prior closed-form result for acyclic thresholded graphs. This is because in an acyclic graph, every edge is a bridge, and the bridge-block decomposition is particularly simple.
Thus the main result amounts to a divide-and-conquer recipe for learning MTP$_2$ GGMs, and the authors show various numerical experiments suggesting the practical superiority of this method over all prior methods (which operate on the entire graph). A key benefit is that the subproblems may be solved using any graphical Lasso implementation whatsoever, and potentially in parallel.
Strengths: Disclaimer: I am not very familiar with the literature in this area, and my review should be taken as that of a relative outsider.
The paper's main result is both an interesting structural result about MTP$_2$ GGMs as well as a genuinely practical algorithmic advance in learning such models. From a conceptual point of view, the idea of leveraging the bridge-block decomposition seems novel. The overall result seems like a useful and nice contribution to the literature on this problem, and to the extent that one considers MTP$_2$ GGMs significant, one should consider this result significant as well.
The paper is largely clear and easy to follow (modulo some occasionally confusing bits; see the Questions section). It does a good job of setting up the main problem as well as the necessary context. I did not manage to verify the proofs in detail, but they seemed fairly clean, relying on an analysis of the KKT conditions of the graphical Lasso as well as some clever algebraic manipulation.
Weaknesses: I think the main things to really evaluate about this paper are its novelty and significance. As an outsider to this area, I find this hard to accurately gauge, but I think the paper scores well on these fronts.
I do think the paper could benefit from a better conceptual overview of the main proof and the role of the bridge-block decomposition. The context and benefits are discussed adequately, but the key ideas in the proof do not come through very well, and the main proof seemed slightly magical to me. Why would one have expected the bridge-block decomposition to help? Was its role surprising?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I think the biggest question I have is about the conceptual role of the bridge-block decomposition, as explained above. A few other questions:
- One of the key parts of the eventual proof of Thm 3.3 is Lemma A.2, which is simply described as following from the KKT conditions. Presumably the authors mean the KKT conditions associated with Problem (5)? Since it plays a relatively important conceptual role, I feel this could definitely use more elaboration.
- What exactly is the time required to compute the bridge-block decomposition? This is described as negligible in Sec 3.1, but is it e.g. $O(p^2)$? Similarly, what about the quantities used in the closed-form formula? Currently the only discussion of the overall asymptotic running time appears in the first bullet of the "Proposed Framework" list on page 5.
- In the experimental section, it would be helpful to say more about why it is natural to synthesize/define $\Theta$ and $\Lambda$ in the specific way described in the first two paragraphs of Section 4.1, especially for readers who are unfamiliar with the prior work mentioned in those paragraphs.
A couple other nits regarding the presentation:
- It is not immediately obvious how Corollary 3.5 follows from Thm 3.3, and this could use a line of explanation.
- The notation $\mathbb{S}^p$ is not formally defined in the paper, and MTP$_2$'s expansion is only mentioned in the abstract.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper could use a couple additional lines in the final section about the technical limitations of this work and what the major next steps could be. I am not aware of any significant potential negative societal impact of this theoretical work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Answer to Questions Part 1
> One of the key parts of the eventual proof of Thm 3.3 is Lemma A.2, which is simply described as following from the KKT conditions. Presumably the authors mean the KKT conditions associated with Problem (5)?
__Reply:__ Thank you for your insightful comment. Here, we elaborate on their relationship.
Let's denote $\Gamma_{ij}$ as the dual variables associated with the constraints $\Theta_{ij}\leq 0$. With $\mathbf{R}=\boldsymbol{\Theta}^{-1}$, the KKT conditions include (1) $-\mathbf{R}+\mathbf{S}-\boldsymbol{\Lambda}+\boldsymbol{\Gamma}=\mathbf{0}$, (2) $\Theta_{ij}\leq0,\forall i\neq j,$ (3) $\Gamma_{ij}\geq0,\forall i\neq j,$ (4) $\Theta_{ij}\cdot\Gamma_{ij}=0,\forall i\neq j.$
We can eliminate the dual variables as follows:
1. For $\Theta_{ij}<0$, the complementary slackness leads to $\Gamma_{ij}=0$, which implies $-R_{ij}+S_{ij}-\Lambda_{ij}=0$;
2. When $\Theta_{ij}=0$, we have $\Gamma_{ij}=R_{ij}-S_{ij}+\Lambda_{ij}\geq 0$, which indicates that $-R_{ij}+S_{ij}-\Lambda_{ij}\leq 0$.
Following these steps, we arrive at the optimality conditions for the original problem. Since the sub-problems have the same form as the original problem, Lemma A.2 holds accordingly. We will incorporate these details into the revised version.
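A numerical sanity check of these eliminated conditions might look as follows (a sketch under the conventions above; `check_mtp2_kkt` is our illustrative name, not code from the paper):

```python
import numpy as np

def check_mtp2_kkt(Theta, S, Lam, tol=1e-8):
    """Numerically verify the eliminated KKT conditions with R = Theta^{-1}:
    diagonal:              R_ii = S_ii;
    Theta_ij < 0 (active): -R_ij + S_ij - Lam_ij = 0;
    Theta_ij = 0:          -R_ij + S_ij - Lam_ij <= 0."""
    R = np.linalg.inv(Theta)
    p = Theta.shape[0]
    for i in range(p):
        if abs(R[i, i] - S[i, i]) > tol:
            return False
        for j in range(p):
            if i == j:
                continue
            if Theta[i, j] < -tol:
                # complementary slackness gives Gamma_ij = 0, so equality must hold
                if abs(-R[i, j] + S[i, j] - Lam[i, j]) > tol:
                    return False
            elif -R[i, j] + S[i, j] - Lam[i, j] > tol:
                # dual feasibility: Gamma_ij = R_ij - S_ij + Lam_ij >= 0
                return False
    return True

# toy case: Theta = I is optimal when S_ii = 1 and S_ij <= Lam_ij off-diagonal
Theta = np.eye(3)
S = np.full((3, 3), 0.2)
np.fill_diagonal(S, 1.0)
Lam = np.full((3, 3), 0.3)
ok = check_mtp2_kkt(Theta, S, Lam)
```

Such a check is convenient for unit-testing any solver for the sub-problems, since the same conditions characterize optimality for both the original problem and each block.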
## Answer to Questions Part 2
> What exactly is the time required to compute the bridge-block decomposition? Is it e.g. O(p^2)? What about the quantities used in the closed-form formula?
__Reply:__ We sincerely appreciate the reviewer's question. Please find our detailed answer in the Part 3 of our global response.
## Answer to Questions Part 3
> In the experimental section, it would be helpful to say more about why it is natural to synthesize/define $\boldsymbol{\Theta}$ and $\boldsymbol{\Lambda}$.
__Reply:__ We appreciate your valuable recommendation and will revise our paper accordingly. The methods we use to synthesize $\boldsymbol{\Theta}$ and $\boldsymbol{\Lambda}$ are guided by [1] and can be explained as follows.
We synthesize $\boldsymbol{\Theta}$ as $\boldsymbol{\Theta}=1.05\cdot\lambda_{\max}(\mathbf{A})\cdot\mathbf{I}-\mathbf{A}$, where $\mathbf{A}$ is the adjacency matrix of the underlying graph. This ensures that $\boldsymbol{\Theta}$ is a positive definite matrix with off-diagonal elements being negative, making $\boldsymbol{\Theta}$ a randomly generated M-matrix. Then, we normalize $\boldsymbol{\Theta}^{-1}$, thereby deriving a randomly generated correlation matrix.
Following this, we set $\Lambda_{ij}=\chi\big/(\epsilon+\Theta_{ij}^{(0)})$. Here, a high penalty is placed on $\Theta_{ij}$ if its initial estimate $\Theta_{ij}^{(0)}$ is small. With this scheme, we can recover the underlying structure by selecting an appropriate value of $\chi$.
[1] Martin Slawski and Matthias Hein. Estimation of positive definite M-matrices and structure learning for attractive Gaussian markov random fields. Linear Algebra and its Applications, 473:145–179, 2015.
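A sketch of this synthesis recipe, using $|\Theta|$ as a stand-in for the initial estimate $\Theta^{(0)}$ (which in the paper would come from a separate estimation step) and illustrative values of $\chi$ and $\epsilon$:

```python
import numpy as np

def synthesize(A, chi=0.1, eps=1e-4):
    """Build a random M-matrix Theta from adjacency matrix A, normalize
    so that Theta^{-1} is a correlation matrix, then form the adaptive
    penalty matrix Lambda (large penalty where the estimate is small)."""
    p = A.shape[0]
    lam_max = np.max(np.linalg.eigvalsh(A))
    Theta = 1.05 * lam_max * np.eye(p) - A        # positive definite M-matrix
    Sigma = np.linalg.inv(Theta)
    d = np.sqrt(np.diag(Sigma))
    Sigma = Sigma / np.outer(d, d)                # normalize to a correlation matrix
    Theta = np.linalg.inv(Sigma)
    Theta0 = np.abs(Theta)                        # stand-in for the initial estimate
    Lam = chi / (eps + Theta0)
    return Theta, Sigma, Lam

# toy example: a path graph on 4 nodes
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0
Theta, Sigma, Lam = synthesize(A)
```

Since $1.05\cdot\lambda_{\max}(\mathbf{A})$ strictly dominates the spectrum of $\mathbf{A}$, the resulting $\boldsymbol{\Theta}$ is positive definite with non-positive off-diagonal entries, and the normalization preserves these sign constraints.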
## Answer to Questions Part 4
> It is not immediately obvious how Corollary 3.5 follows from Thm 3.3.
__Reply:__ We appreciate your valuable suggestion. Here's a clarification on why a bridge $(i,j)$ in the thresholded graph will persist as a bridge in the optimal graph.
An edge $(i,j)$ is a bridge if and only if $(i,j)$ itself is the unique path connecting nodes $i$ and $j$, i.e., $d_{ij}=\{(i,j)\}$. By the definition of a bridge, its removal increases the number of connected components of the graph, so the presence of any additional path between $i$ and $j$ would contradict this definition.
Since the optimal graph is a subgraph of the thresholded graph and $\Theta_{ij}\neq 0$, the edge $(i,j)$ with $d_{ij}=\{(i,j)\}$ remains the unique path from node $i$ to node $j$, which shows that $(i,j)$ is still a bridge in the optimal graph.
## Answer to Questions Part 5
> The notation $\mathbb{S}^p$ is not formally defined in the paper, and MTP$_2$'s expansion is only mentioned in the abstract.
__Reply:__ We appreciate your keen observation that calls for clarity. The symbol $\mathbb{S}^p$ represents the set of symmetric matrices with a dimension of $p\times p$. We will incorporate the full form of MTP2 into the body of the paper.
## Reply to Comments
> I do think the paper could benefit from a better conceptual overview of the main proof and the role of the bridge-block decomposition. Why would one have expected the bridge-block decomposition to help? Was its role surprising?
__Reply:__ We sincerely value your comprehensive review of our paper and your engagement with the core concepts underlying our proof.
Our research is influenced by the study of (Fattahi et al., 2019), which indicates that a closed-form solution exists for large-scale graphical lasso problems. Their intriguing findings hold potential for large-scale data sets, yet their theorem imposes some hard-to-validate conditions and requires the thresholded graph to be acyclic.
In the realm of the MTP2 graph learning problem that interests us, we aimed to conduct related research and uncover similar findings. Our investigations led us to discover that (1) the existence of a closed-form solution hinges not on the acyclicity of the thresholded graph, but on whether an edge is a bridge; (2) given that bridges admit closed-form solutions, if our aim is to decompose the problem, we should form the sub-problems excluding the bridges, which gives rise to our bridge-block decomposition strategy; and (3) by adapting the proofs in (Fattahi et al., 2019), we eventually prove that all the conditions that are typically difficult to verify are naturally satisfied in MTP2 graphical models.
As our approach does not impose additional conditions, and it does not require the thresholded graph to be acyclic, our method has a wider range of applicability.
-- Salar Fattahi and Somayeh Sojoudi. Graphical lasso and thresholding: Equivalence and closed-form solutions. Journal of Machine Learning Research, 2019.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the responses, thank you. It would be great to incorporate some of this into the final revision.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate the time and effort you've dedicated to reviewing our paper. Your constructive comments have been instrumental in refining our paper.
Thank you once again for your invaluable input and support.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: ## Part 1: Answers to Questions Regarding Roles of MTP2
> We gather this question from Reviewers ghhX and pk8s, who sought the role of the MTP2 constraint in our main results and why these only apply to MTP2 distributions.
__Reply__: We thank the reviewers for bringing up this interesting point. Our research offers two main theoretical advancements. The first proposes an explicit form for the inverse of $\boldsymbol{\Theta}$, while the second confirms that $\boldsymbol{\Theta}=\mathbf{R}^{-1}$ meets optimality without extra conditions.
__The MTP2 properties are sufficient conditions for the second contribution to hold__. Hence, while the bridge-block decomposed form of $\mathbf{R}$ could be broadly applied to other graphical models, our main findings are only applicable to MTP2 graphical models.
Technically speaking, the MTP2 constraints eliminate the non-smoothness, thus __simplifying the KKT conditions__ as follows:
||Graphical Lasso|MTP2|
|:-|:-|:-|
|$\forall i$| $-R_{ii}+S_{ii}=0,$|$-R_{ii}+S_{ii}=0,$|
|$\forall\Theta_{ij}\neq0$|$-R_{ij}+S_{ij}+\lambda_{ij}\text{sign}\left(\Theta_{ij}\right)=0,$| $-R_{ij}+S_{ij}-\lambda_{ij}=0$|
|$\forall\Theta_{ij}=0$|$\vert -R_{ij}+S_{ij}\vert\leq \lambda_{ij}$|$-R_{ij}+S_{ij}-\lambda_{ij}\leq0$|
Simultaneously, $\mathbf{R}$ becomes non-negative ($\mathbf{R}\geq\boldsymbol{0}$). As a result, the most challenging part of the KKT conditions, $-R_{ij}+S_{ij}-\lambda_{ij}\leq 0$, holds under the MTP2 properties. (See Section B of our Appendix for details.)
## Part 2: Answer to Questions Regarding Generalizing Our Results to Graphical Lasso
> Reviewers ghhX and pk8s provided feedback regarding the applicability of our findings to the graphical lasso.
__Reply__: We thank the reviewers for posing this thought-provoking question. In the context of the graphical lasso, our current, yet-to-be-published research indicates that our decomposed form achieves optimality if the following inequality is satisfied:
$| -R_{ij} +S_{ij}|\leq \lambda_{ij},\quad\forall (i,j)\notin \mathcal B\text{ and }i,j \text{ belong to different clusters,}$
where $\mathbf{R}$ is computed by Theorem 3.4. This condition could be easily established in certain cases, such as when $\lambda_{ij}\gg 1$. Nevertheless, in the majority of practical situations, verifying this condition poses a substantial challenge.
Conversely, these conditions are inherently satisfied when MTP2 constraints are applied. We hope that our theorem will instigate future studies to simplify these conditions for the graphical lasso.
## Part 3: Answers to Questions Regarding the Exact Procedures of Proposed Framework
> We received this question from Reviewers vWNe and pk8s, who sought to understand the exact procedures and processing time of the proposed method.
__Reply__: We'll use an SBM graph with $p=5000$ as a reference. The specifics are outlined below:
__Preprocessing__: In this phase, we have three steps:
|Step|Elements to Compute|Time (s)|
|:-:|:-|:-:|
|1| Derive the thresholded matrix $\mathbf{T}$ from $\mathbf{S}$ and $\boldsymbol{\Lambda}$ |0.01|
|2|Extract the set of all bridges $\mathcal B$ from the thresholded graph |0.21|
|3|Determine the bridge-block decomposition|0.12|
__Solving Subproblems__: For all clusters, we solve the associated sub-problems.
__Computing Optimal Solution via Theorem 3.3__: The optimal $\boldsymbol{\Theta}$ is then derived as follows:
|Conditions|Formulas|Cost|
|:-|:-|:-|
|$i,j\in \mathcal V_k$| Derive $\Theta_{ij}$ from $\widehat{\boldsymbol{\Theta}}_k$|Depends on specific algorithms (BCD: 967s; FPN: 101s).|
|$(i,j)\in\mathcal B$|$\Theta_{ij}= -T_{ij}/(S_{ii}S_{jj}-T_{ij}^2)$|0.12s|
|otherwise|$\Theta_{ij}=0$|0s|
The findings reveal that the additional costs, such as the time for pre-processing and the employment of Theorem 3.3, are considerably less compared to the time invested in solving sub-problems.
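The assembly step in the table above can be sketched as follows; the `blocks`/`block_solutions` interface is an illustrative assumption (the actual construction follows Theorem 3.3), with blocks given as disjoint node sets:

```python
import numpy as np

def stitch(p, blocks, block_solutions, bridges, S, T):
    """Assemble the global Theta: copy each block's solution into place,
    fill bridge entries via the closed-form formula, zeros elsewhere."""
    Theta = np.zeros((p, p))
    for nodes, Theta_k in zip(blocks, block_solutions):
        Theta[np.ix_(nodes, nodes)] = Theta_k
    for (i, j) in bridges:
        val = -T[i, j] / (S[i, i] * S[j, j] - T[i, j] ** 2)
        Theta[i, j] = Theta[j, i] = val
    return Theta

# toy example: two 2-node blocks joined by the single bridge (1, 2)
blocks = [[0, 1], [2, 3]]
Theta_k = np.array([[2.0, -1.0], [-1.0, 2.0]])
S = np.eye(4)
T = np.zeros((4, 4))
T[1, 2] = T[2, 1] = 0.5
Theta = stitch(4, blocks, [Theta_k, Theta_k], [(1, 2)], S, T)
```

Because the bridge entries have a closed form, the only nontrivial cost is solving the per-block sub-problems, which matches the timing breakdown above.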
We find bridges using a bridge-finding algorithm from [1]. This method uses a depth-first search, which means the complexity is $\mathcal{O}(|\mathcal V| +|\mathcal E|)$. In the sparse graphs we're interested in, the number of edges $|\mathcal E|$ usually scales similarly to the number of nodes $|\mathcal V|$. Hence, the computational cost of bridges for high-dimensional sparse graphs is low.
[1] R Endre Tarjan. A note on finding the bridges of a graph. Information Processing Letters, 2(6):160–161, 1974.
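A self-contained sketch of such a DFS-based bridge-finding routine for simple graphs (recursive, so intended only as an illustration for moderate graph sizes):

```python
def find_bridges(n, adj):
    """Find all bridges with one DFS (Tarjan-style): tree edge (u, v) is
    a bridge iff low[v] > disc[u], i.e. the subtree rooted at v cannot
    reach u or any node discovered before u."""
    disc = [-1] * n          # discovery times
    low = [0] * n            # lowest discovery time reachable
    bridges = []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v] != -1:            # back edge
                low[u] = min(low[u], disc[v])
            else:                        # tree edge
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# toy example: a triangle 0-1-2 with a pendant edge 2-3; only (2, 3) is a bridge
bridges = find_bridges(4, {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]})
```

Each vertex and edge is touched a constant number of times, giving the $\mathcal{O}(|\mathcal V|+|\mathcal E|)$ complexity cited above.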
## Part 4: Experiments on Testing MTP2 in CROP Dataset
> In our paper, we mainly focus on how the proposed method accelerates the learning of MTP2 graphical models on the CROP dataset. Reviewer ghhX suggested that we should also justify why the MTP2 constraint is plausible for this problem. To address this concern, we present additional experiments.
__Reply__: We selected 20 random subsets from the CROP dataset. For each subset, we computed the Graphical Lasso and the MTP2 graphical model for different values of $\lambda$ using the first 10 observations. The remaining 36 observations were used to calculate the out-of-sample log-likelihood, which was then averaged across all datasets. This process allows us to evaluate how well these models generalize to unseen data.
As depicted in Figure 5 of the attached PDF, the MTP2 graphical model outperforms the Graphical Lasso, providing a higher test log-likelihood. We present one instance of the estimated graphical lasso model in Figure 6. It reveals that __most conditional correlations are positive__ (red edges, 90%), with a few being negative (blue edges, 10%). This pattern implies strong positive dependence in the CROP data, aligning with the characteristics of MTP2.
These results are not unexpected, given that the CROP dataset comprises multiple clusters. Within the same cluster, we expect data points to exhibit greater similarity compared to those in different clusters. This situation signifies a form of positive dependence, thereby justifying the plausibility of the MTP2 assumption in this problem.
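The out-of-sample evaluation above can be sketched as follows. This is a minimal illustration of scoring held-out data under a fitted precision matrix, assuming zero-mean data; the function name is ours, not from the paper:

```python
import numpy as np

def avg_gaussian_loglik(X_test, Theta):
    """Average out-of-sample Gaussian log-likelihood of held-out rows X_test
    (assumed zero-mean) under a fitted precision matrix Theta."""
    p = Theta.shape[0]
    S = X_test.T @ X_test / X_test.shape[0]   # test-sample covariance
    sign, logdet = np.linalg.slogdet(Theta)    # stable log-determinant
    return 0.5 * (logdet - np.trace(S @ Theta) - p * np.log(2 * np.pi))
```

A model that generalizes better, such as the MTP2 estimate here, yields a higher value of this score on the held-out observations.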
Pdf: /pdf/c35690922d7febcc3cc040ffd39da244108621cc.pdf

Review 2:
Summary: The paper focuses on the problem of learning large-scale Gaussian graphical models (GGMs) that are multivariate totally positive of order two (MTP2). The high-dimensional, sparse MTP2 GGMs are not easily manageable due to their size and complexity. The authors propose a novel approach, introducing the concept of a "bridge", to optimize the entire problem into several smaller, more manageable sub-problems and a set of closed-form solutions. The approach is based on the bridge-block decomposition of the thresholded sample covariance graph, which leads to reductions in computational complexity and improvements in existing algorithms.
Strengths: The proposed bridge-block decomposition framework on Gaussian graphical models seems novel. The problem is motivated nicely, and according to the authors, the framework could significantly reduce computational and memory cost.
The proposed method seems to subsume various network structures, including the BA graph and the SBM, which are common models used in network analysis.
Experimental results are provided, and the computational results look promising.
Weaknesses: As the authors mention in the paper, the proposed method might not generalize to dense cases. Still, I feel that in many settings, such as BA and SBM graphs, sparsity is a reasonable assumption.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is there any way to characterize the effect of the proposed approach in terms of some global graphical properties, for example, the edge expansion? Asking this because bridge is related.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

Rebuttal 1:
## Answer to Questions
> Is there any way to characterize the effect of the proposed approach in terms of some global graphical properties, for example, the edge expansion? Asking this because bridge is related.
__Reply:__ We appreciate the reviewer's interesting and insightful question. Indeed, certain global graphical properties could potentially serve as indicators of a graph's connectivity strength, thereby providing a measure of our method's effectiveness. However, the computation of edge expansion presents a significant challenge, especially for high-dimensional graphs.
As an alternative, we opted to utilize another global graphical property known as __algebraic connectivity__. This metric, the second smallest eigenvalue of a graph's Laplacian matrix, offers a depiction of the graph's overall connectivity. It can be employed to characterize the impact of our proposed solution.
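Algebraic connectivity is straightforward to compute for moderate graph sizes. A minimal sketch, assuming a dense symmetric adjacency matrix (the function name is our own):

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A,
    where A is a symmetric (unweighted) adjacency matrix."""
    L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian
    eigvals = np.sort(np.linalg.eigvalsh(L))    # eigvalsh: symmetric solver
    return eigvals[1]                           # smallest eigenvalue is 0
```

For a path graph on three nodes, for instance, the Laplacian spectrum is {0, 1, 3}, so the algebraic connectivity is 1.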
To demonstrate this, we ran tests on two random SBM graphs with distinct algebraic connectivities and evaluated by how many times our proposed method accelerates the convergence of existing algorithms. The corresponding results are displayed in the table below:
|Algebraic connectivity |BCD|PGD|FPN|PQN-LBFGS|
|:-:|:-:|:-:|:-:|:-:|
| 4e-5 |$2064$ | $314$ |$213$ |$57$ |
| 4e-4 |$26$ | $9$ |$13$ | $5$ |
It is clear that our proposed method is much more effective on graphs with low algebraic connectivity.
Details regarding convergence (refer to Figure 3) and more in-depth analyses (refer to Figure 1) are available in the attached PDF. Specifically, we generate numerous random graphs and, for each, examine the relationship between its algebraic connectivity and the theoretical factor by which we can accelerate a BCD method. In Figure 1, each plotted point represents a specific graph, with the x-axis indicating the algebraic connectivity and the y-axis the acceleration factor.
__Conclusion__: The results indicate a general trend: __as the algebraic connectivity decreases, the effectiveness of our proposed method enhances.__
Remarks: The theoretical acceleration factor is computed as follows. Assume that a problem of dimension $p$ can be solved by the BCD algorithm in $c\cdot p^4$ seconds, where $c$ is a constant. With the bridge-block decomposition, this cost is reduced to $\sum_k c\cdot p_k^4$, where $p_k$ denotes the size of the $k$-th cluster. Therefore, we define the ratio $p^4\big /\sum_k p_k^4$ as the theoretical factor by which our method speeds up the BCD algorithm.
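The theoretical factor above reduces to a one-line computation; here is an illustrative sketch (the function name is ours):

```python
def theoretical_speedup(cluster_sizes):
    """Theoretical acceleration factor p^4 / sum_k p_k^4 for a solver whose
    cost scales as c * p^4, after bridge-block decomposition splits the
    problem into clusters of the given sizes."""
    p = sum(cluster_sizes)                       # total dimension
    return p**4 / sum(pk**4 for pk in cluster_sizes)
```

For example, splitting a problem of dimension 10 into two clusters of size 5 gives a factor of $10^4 / (2 \cdot 5^4) = 8$.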
## Reply to Comments
> The proposed method seems to subsume various network structures, including the BA graph and the SBM, which are common models used in network analysis. As the authors mentioned in the paper, the proposed method might not generalize to dense cases. Still I feel in many settings like BA and SBM, sparsity is a reasonable assumption.
__Reply:__ We appreciate the reviewer's comment and acknowledge that there are numerous applications beyond the scope of this paper where the graph is dense.
In response to this, we have added Section C in our Appendix, which discusses various alternative strategies for integrating our proposed method into the learning of dense MTP2 graphical models. For instance, though our decomposition formula only yields exact solutions for sparse graphs via bridge-block decomposition, it could be regarded as __an approximate solution__ for dense graphs. Additionally, the explicitly decomposed form could serve as an __effective warm start__ for other numerical algorithms. | null | null | null | null | null | null |